
Update index.qmd #548

Merged: 9 commits, Nov 26, 2024
10 changes: 5 additions & 5 deletions in tutorials/docs-10-using-turing-autodiff/index.qmd

As of Turing version v0.30, the global configuration flag for the AD backend has been removed.
Users can pass the `adtype` keyword argument to the sampler constructor to select the desired AD backend, with the default being `AutoForwardDiff(; chunksize=0)`.

For `ForwardDiff`, pass `adtype=AutoForwardDiff(; chunksize)` to the sampler constructor. A `chunksize` of 0 permits the chunk size to be automatically determined. For more information regarding the selection of `chunksize`, please refer to the [related section of `ForwardDiff`'s documentation](https://juliadiff.org/ForwardDiff.jl/dev/user/advanced/#Configuring-Chunk-Size).
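As a minimal sketch (the model here is a hypothetical toy; any Turing model works the same way):

```julia
using Turing

# Hypothetical toy model: two parameters, observations x.
@model function gdemo(x)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    for i in eachindex(x)
        x[i] ~ Normal(m, sqrt(s²))
    end
end

# Fix the chunk size at 8; chunksize=0 (the default) picks it automatically.
chain = sample(gdemo([1.5, 2.0]), NUTS(; adtype=AutoForwardDiff(; chunksize=8)), 1000)
```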
For `ReverseDiff`, pass `adtype=AutoReverseDiff()` to the sampler constructor. An additional argument can be provided to `AutoReverseDiff` to specify whether the tape should be cached after its first construction and reused in later evaluations (`false` by default, which means no tape caching). This can substantially improve performance, but risks silently incorrect results if not used with care.
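A sketch of enabling tape caching, assuming the `compile` keyword that ADTypes.jl's `AutoReverseDiff` exposes:

```julia
using Turing
import ReverseDiff  # the backend package must be loaded for AutoReverseDiff to work

# Reuses the gdemo model from the sketch above. compile=true records the tape
# once and replays it, which is only safe if the control flow never changes.
chain = sample(gdemo([1.5, 2.0]), NUTS(; adtype=AutoReverseDiff(; compile=true)), 1000)
```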

Cached tapes should only be used if you are absolutely certain that the sequence of operations performed in your code does not change between different executions of your model.
Thus, e.g., in the model definition and all implicitly and explicitly called functions in the model, all loops should be of fixed size, and `if`-statements should consistently execute the same branches.
For instance, `if`-statements with conditions that can be determined at compile time or conditions that depend only on fixed properties of the data will always execute the same branches during sampling (if the data is constant throughout sampling and, e.g., no mini-batching is used).
However, `if`-statements that depend on the model parameters can take different branches during sampling; hence, a cached tape might be incorrect.
Thus you must not use cached tapes when your model makes decisions based on the model parameters, and if you compute functions of the parameters, you should make sure that those functions do not contain branches that execute different code for different parameter values.
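As a hypothetical illustration of a model that is unsafe with a cached tape:

```julia
using Turing

# UNSAFE with a cached tape: which branch runs depends on the sampled
# parameter m, so a tape recorded under one branch is silently wrong
# whenever the sampler moves m across zero.
@model function unsafe_demo(x)
    m ~ Normal(0, 1)
    if m > 0
        x ~ Normal(m, 1)
    else
        x ~ Normal(m, 2)
    end
end
```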


Generally, reverse-mode AD, for instance `ReverseDiff`, is faster when sampling from variables of high dimensionality (greater than 20), while forward-mode AD, for instance `ForwardDiff`, is more efficient for lower-dimensional variables. This functionality allows performance-sensitive users to fine-tune automatic differentiation for their specific models.
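For instance (`high_dim_model` and `low_dim_model` are hypothetical placeholders for your own models):

```julia
# Reverse mode tends to scale better for high-dimensional parameter spaces...
chain_rd = sample(high_dim_model, NUTS(; adtype=AutoReverseDiff()), 1000)

# ...while forward mode is typically faster for low-dimensional models.
chain_fd = sample(low_dim_model, NUTS(; adtype=AutoForwardDiff()), 1000)
```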

If the differentiation method is not specified in this way, Turing will default to using whatever the global AD backend is. Currently, this defaults to `ForwardDiff`.