diff --git a/README.md b/README.md
index dc3d31b5..f96e6e70 100644
--- a/README.md
+++ b/README.md
@@ -6,21 +6,21 @@
[![Stable](https://img.shields.io/badge/docs-stable-blue.svg)](https://turinglang.github.io/AdvancedHMC.jl/stable/)
[![Dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://turinglang.github.io/AdvancedHMC.jl/dev/)
-AdvancedHMC.jl provides a robust, modular and efficient implementation of advanced HMC algorithms. An illustrative example for AdvancedHMC's usage is given below. AdvancedHMC.jl is part of [Turing.jl](https://github.com/TuringLang/Turing.jl), a probabilistic programming library in Julia.
+AdvancedHMC.jl provides a robust, modular, and efficient implementation of advanced HMC algorithms. An illustrative example of AdvancedHMC's usage is given below. AdvancedHMC.jl is part of [Turing.jl](https://github.com/TuringLang/Turing.jl), a probabilistic programming library in Julia.
If you are interested in using AdvancedHMC.jl through a probabilistic programming language, please check it out!
**Interfaces**
- [`IMP.hmc`](https://github.com/salilab/hmc): an experimental Python module for the Integrative Modeling Platform, which uses AdvancedHMC in its backend to sample protein structures.
**NEWS**
-- We presented a paper for AdvancedHMC.jl at [AABI](http://approximateinference.org/) 2019 in Vancouver, Canada. ([abs](http://proceedings.mlr.press/v118/xu20a.html), [pdf](http://proceedings.mlr.press/v118/xu20a/xu20a.pdf), [OpenReview](https://openreview.net/forum?id=rJgzckn4tH))
+- We presented a paper on AdvancedHMC.jl at [AABI](http://approximateinference.org/) 2019 in Vancouver, Canada. ([abs](http://proceedings.mlr.press/v118/xu20a.html), [pdf](http://proceedings.mlr.press/v118/xu20a/xu20a.pdf), [OpenReview](https://openreview.net/forum?id=rJgzckn4tH))
- We presented a poster for AdvancedHMC.jl at [StanCon 2019](https://mc-stan.org/events/stancon2019Cambridge/) in Cambridge, UK. ([pdf](https://github.com/TuringLang/AdvancedHMC.jl/files/3730367/StanCon-AHMC.pdf))
**API CHANGES**
-- [v0.5.0] **Breaking!** Convinience constructors for common samplers changed to:
- - `HMC(init_ϵ::Float64=init_ϵ, n_leapfrog::Int=n_leapfrog)`
- - `NUTS(n_adapts::Int=n_adapts, δ::Float64=δ)`
- - `HMCDA(n_adapts::Int=n_adapts, δ::Float64=δ, λ::Float64=λ)`
+- [v0.5.0] **Breaking!** Convenience constructors for common samplers changed to:
+ - `HMC(n_leapfrog)`
+ - `NUTS(target_acceptance)`
+ - `HMCDA(target_acceptance, integration_time)`
- [v0.2.22] Three functions are renamed.
- `Preconditioner(metric::AbstractMetric)` -> `MassMatrixAdaptor(metric)` and
- `NesterovDualAveraging(δ, integrator::AbstractIntegrator)` -> `StepSizeAdaptor(δ, integrator)`
@@ -33,24 +33,24 @@ If you are interested in using AdvancedHMC.jl through a probabilistic programmin
## A minimal example - sampling from a multivariate Gaussian using NUTS
-In this section we demonstrate a minimal example of sampling from a multivariate Gaussian (10 dimensional) using the no U-turn sampler (NUTS). Below we describe the major components of the Hamiltonian system which are essential to sample using this approach:
+This section demonstrates a minimal example of sampling from a multivariate Gaussian (10-dimensional) using the no U-turn sampler (NUTS). Below we describe the major components of the Hamiltonian system which are essential to sample using this approach:
-- **Metric**: In many sampling problems the sample space is usually associated with a metric, that allows us to measure the distance between any two points, and other similar quantities. In the example in this section, we use a special metric called the **Euclidean Metric**, represented with a `D × D` matrix from which we can compute distances.
+- **Metric**: In many sampling problems the sample space is associated with a metric that allows us to measure the distance between any two points, and other similar quantities. In the example in this section, we use a special metric called the **Euclidean Metric**, represented with a `D × D` matrix from which we can compute distances.
Further details about the Metric component
The Euclidean metric is also known as the mass matrix from a physical perspective. For available metrics, refer to Hamiltonian mass matrix.
-- **Leapfrog integration**: Leapfrog integration is a second-order numerical method for integrating differential equations (In this case they are, equations of motion for the relative position of one particle with respect to the other). The order of this integration signifies its rate of convergence. Any alogrithm with a finite time step size will have numerical errors and the order is related to this error. For a second-order algorithm, this error scales as the second power of the time step, hence, the name second-order. High-order intergrators are usually complex to code and have a limited region of convergence, hence they do not allow arbitrarily large time steps. A second-order integrator is suitable for our purpose, hence we opt for the leapfrog integrator. It is called `leapfrog` due to the ways this algorithm is written, where the positions and velocities of particles `leap over` each other.
+- **Leapfrog integration**: Leapfrog integration is a second-order numerical method for integrating differential equations (in this case, the equations of motion for the relative position of one particle with respect to another). The order of this integration signifies its rate of convergence. Any algorithm with a finite time step size will have numerical errors, and the order is related to this error. For a second-order algorithm, this error scales as the second power of the time step; hence the name second-order. Higher-order integrators are usually complex to code and have a limited region of convergence; hence they do not allow arbitrarily large time steps. A second-order integrator is suitable for our purpose, so we opt for the leapfrog integrator. It is called `leapfrog` because of the way the algorithm is written, where the positions and velocities of particles `leap over` each other.
About the leapfrog integration scheme
- Suppose ${\bf x}$ and ${\bf v}$ are the position and velocity of an individual particle respectively; $i$ and $i+1$ are the indices for time values $t_i$ and $t_{i+1}$ respectively; $dt = t_{i+1} - t_i$ is the time step size (constant and regularly spaced intervals); and ${\bf a}$ is the acceleration induced on a particle by the forces of all other particles. Furthermore, suppose positions are defined at times $t_i, t_{i+1}, t_{i+2}, \dots $, spaced at constant intervals $dt$, the velocities are defined at halfway times in between, denoted by $t_{i-1/2}, t_{i+1/2}, t_{i+3/2}, \dots $, where $t_{i+1} - t_{i + 1/2} = t_{i + 1/2} - t_i = dt / 2$, and the accelerations ${\bf a}$ are defined only on integer times, just like the positions. Then the leapfrog integration scheme is given as: $x_{i} = x_{i-1} + v_{i-1/2} dt; \quad v_{i+1/2} = v_{i-1/2} + a_i dt$. For available integrators refer Integrator.
+ Suppose ${\bf x}$ and ${\bf v}$ are the position and velocity of an individual particle respectively; $i$ and $i+1$ are the indices for time values $t_i$ and $t_{i+1}$ respectively; $dt = t_{i+1} - t_i$ is the time step size (constant and regularly spaced intervals); and ${\bf a}$ is the acceleration induced on a particle by the forces of all other particles. Furthermore, suppose positions are defined at times $t_i, t_{i+1}, t_{i+2}, \dots $, spaced at constant intervals $dt$, the velocities are defined at halfway times in between, denoted by $t_{i-1/2}, t_{i+1/2}, t_{i+3/2}, \dots $, where $t_{i+1} - t_{i + 1/2} = t_{i + 1/2} - t_i = dt / 2$, and the accelerations ${\bf a}$ are defined only on integer times, just like the positions. Then the leapfrog integration scheme is given as: $x_{i} = x_{i-1} + v_{i-1/2} dt; \quad v_{i+1/2} = v_{i-1/2} + a_i dt$. For available integrators, refer to Integrator. A toy numerical sketch of this scheme is given after this list.
-- **Proposal for trajectories (static or dynamic)**: Different types of proposals can be used, which maybe static or dynamic. At each iteration of any variant of the HMC algorithm there are two main steps - the first step changes the momentum and the second step may change both the position and the momentum of a particle.
+- **Kernel for trajectories (static or dynamic)**: Different kernels, which may be static or dynamic, can be used. At each iteration of any variant of the HMC algorithm, there are two main steps: the first changes the momentum, and the second may change both the position and the momentum of a particle.
- More about the proposals
- In the classical HMC approach, during the first step, new values for the momentum variables are randomly drawn from their Gaussian distribution, independently of the current values of the position variables. Whereas, during the second step, a Metropolis update is performed, using Hamiltonian dynamics to provide a new state. For available proposals refer Proposal.
+ More about the kernels
+ In the classical HMC approach, during the first step, new values for the momentum variables are randomly drawn from their Gaussian distribution, independently of the current values of the position variables. A Metropolis update is performed during the second step, using Hamiltonian dynamics to provide a new state. For available kernels, refer to Kernel.
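+
+A toy numerical sketch of the leapfrog scheme above, for a single particle in a harmonic potential where the acceleration is $a(x) = -x$ (illustrative only, not AdvancedHMC's internal implementation):
+
+```julia
+# Kick-drift-kick leapfrog for ẍ = -x; the exact solution is x(t) = cos(t).
+function leapfrog_sketch(x, v, dt, n_steps)
+    a(x) = -x                  # acceleration induced by the potential
+    v += a(x) * dt / 2         # initial half-step: velocity at t + dt/2
+    for _ in 1:(n_steps - 1)
+        x += v * dt            # full position step
+        v += a(x) * dt         # two merged velocity half-steps
+    end
+    x += v * dt
+    v += a(x) * dt / 2         # final half-step brings v back to integer time
+    return x, v
+end
+
+leapfrog_sketch(1.0, 0.0, 0.1, 100)  # ≈ (cos(10), -sin(10)) ≈ (-0.839, 0.544)
+```
+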
```julia
@@ -77,21 +77,21 @@ n_samples, n_adapts = 2_000, 1_000
metric = DiagEuclideanMetric(D)
hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)
-# Define a leapfrog solver, with initial step size chosen heuristically
+# Define a leapfrog solver, with the initial step size chosen heuristically
initial_ϵ = find_good_stepsize(hamiltonian, initial_θ)
integrator = Leapfrog(initial_ϵ)
-# Define an HMC sampler, with the following components
+# Define an HMC sampler with the following components
# - multinomial sampling scheme,
# - generalised No-U-Turn criteria, and
# - windowed adaption for step-size and diagonal mass matrix
-proposal = NUTS{MultinomialTS, GeneralisedNoUTurn}(integrator)
+kernel = HMCKernel(Trajectory{MultinomialTS}(integrator, GeneralisedNoUTurn()))
adaptor = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(0.8, integrator))
# Run the sampler to draw samples from the specified Gaussian, where
# - `samples` will store the samples
# - `stats` will store diagnostic statistics for each sample
-samples, stats = sample(hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts; progress=true)
+samples, stats = sample(hamiltonian, kernel, initial_θ, n_samples, adaptor, n_adapts; progress=true)
```
### Parallel sampling
@@ -102,7 +102,7 @@ It also supports vectorized sampling for static HMC and has been discussed in mo
The below example utilizes the `@threads` macro to sample 4 chains across 4 threads.
```julia
-# Ensure that julia was launched with appropriate number of threads
+# Ensure that Julia was launched with an appropriate number of threads
println(Threads.nthreads())
# Number of chains to sample
@@ -114,21 +114,130 @@ chains = Vector{Any}(undef, nchains)
# The `samples` from each parallel chain are stored in the `chains` vector
# Adjust the `verbose` flag as per need
Threads.@threads for i in 1:nchains
- samples, stats = sample(hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts; verbose=false)
+ samples, stats = sample(hamiltonian, kernel, initial_θ, n_samples, adaptor, n_adapts; verbose=false)
chains[i] = samples
end
```
+### Using the `AbstractMCMC` interface
+
+Users can also draw samples through the `AbstractMCMC` interface, which is what Turing.jl uses under the hood.
+To show how this is done, let us start from our previous example, where we defined a `LogTargetDensity`, `ℓπ`.
+
+```julia
+# Wrap the previous LogTargetDensity as LogDensityModel
+# where ℓπ::LogTargetDensity
+model = AdvancedHMC.LogDensityModel(LogDensityProblemsAD.ADgradient(Val(:ForwardDiff), ℓπ))
+
+# Wrap the previous sampler as an HMCSampler <: AbstractMCMC.AbstractSampler
+D = 10; initial_θ = rand(D)
+n_samples, n_adapts, δ = 1_000, 2_000, 0.8
+sampler = HMCSampler(kernel, metric, adaptor)
+
+# Now just sample
+samples = AbstractMCMC.sample(
+ model,
+ sampler,
+ n_adapts + n_samples;
+ nadapts = n_adapts,
+ init_params = initial_θ,
+ )
+```
+
+### Convenience Constructors
+
+In the previous examples, we built the sampler by manually specifying the integrator, metric, kernel, and adaptor. However, in many cases, users might want to initialize a standard NUTS sampler, and having to define each of these components by hand is tedious and error-prone. For this reason, `AdvancedHMC` also provides a series of convenience constructors for standard samplers. We will now show how to use them.
+
+- HMC:
+ ```julia
+ # HMC Sampler
+  # number of leapfrog steps, and a leapfrog integrator with step size 0.1
+  n_leapfrog, lf_integrator = 25, Leapfrog(0.1)
+  hmc = HMC(n_leapfrog, integrator = lf_integrator)
+ ```
+
+ Equivalent to:
+
+ ```julia
+ metric = DiagEuclideanMetric(D)
+ hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)
+ integrator = Leapfrog(0.1)
+ kernel = HMCKernel(Trajectory{EndPointTS}(integrator, FixedNSteps(n_leapfrog)))
+ adaptor = NoAdaptation()
+ hmc = HMCSampler(kernel, metric, adaptor)
+ ```
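+
+  Since the convenience constructors return `AbstractMCMC.AbstractSampler`s, they can be passed to `AbstractMCMC.sample` just like the manually assembled `HMCSampler` above; a sketch, reusing the `model` wrapped in the previous section:
+
+  ```julia
+  samples = AbstractMCMC.sample(model, hmc, n_samples; init_params = initial_θ)
+  ```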
+
+- NUTS:
+ ```julia
+ # NUTS Sampler
+  # target acceptance probability
+ δ = 0.8
+ nuts = NUTS(δ)
+ ```
+
+ Equivalent to:
+
+ ```julia
+ metric = DiagEuclideanMetric(D)
+ hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)
+ initial_ϵ = find_good_stepsize(hamiltonian, initial_θ)
+ integrator = Leapfrog(initial_ϵ)
+ kernel = HMCKernel(Trajectory{MultinomialTS}(integrator, GeneralisedNoUTurn()))
+ adaptor = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(δ, integrator))
+ nuts = HMCSampler(kernel, metric, adaptor)
+ ```
+
+
+- HMCDA:
+ ```julia
+  # HMCDA (dual averaging)
+  # target acceptance probability, target trajectory length
+ δ, λ = 0.8, 1.0
+ hmcda = HMCDA(δ, λ)
+ ```
+
+ Equivalent to:
+
+ ```julia
+ metric = DiagEuclideanMetric(D)
+ hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)
+ initial_ϵ = find_good_stepsize(hamiltonian, initial_θ)
+ integrator = Leapfrog(initial_ϵ)
+ kernel = HMCKernel(Trajectory{EndPointTS}(integrator, FixedIntegrationTime(λ)))
+ adaptor = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(δ, integrator))
+ hmcda = HMCSampler(kernel, metric, adaptor)
+ ```
+
+Moreover, there is some flexibility in how these samplers can be initialized.
+For example, a user can initialize a NUTS (and likewise an HMC or HMCDA) sampler with a custom metric and integrator.
+This can be done as follows:
+```julia
+nuts = NUTS(δ, metric = :diagonal) # metric = DiagEuclideanMetric(D) (default!)
+nuts = NUTS(δ, metric = :unit)     # metric = UnitEuclideanMetric(D)
+nuts = NUTS(δ, metric = :dense)    # metric = DenseEuclideanMetric(D)
+# Provide your own AbstractMetric
+metric = DiagEuclideanMetric(10)
+nuts = NUTS(δ, metric = metric)
+
+nuts = NUTS(δ, integrator = :leapfrog)         # integrator = Leapfrog(ϵ) (default!)
+nuts = NUTS(δ, integrator = :jitteredleapfrog) # integrator = JitteredLeapfrog(ϵ, 0.1ϵ)
+nuts = NUTS(δ, integrator = :temperedleapfrog) # integrator = TemperedLeapfrog(ϵ, 1.0)
+
+# Provide your own AbstractIntegrator (here ϵ is a step size of your choosing)
+integrator = JitteredLeapfrog(ϵ, 0.2ϵ)
+nuts = NUTS(δ, integrator = integrator)
+```
+
### GPU Sampling with CUDA
There is experimental support for running static HMC on the GPU using CUDA.
-To do so the user needs to have [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) installed, ensure the logdensity of the `Hamiltonian` can be executed on the GPU and that the initial points are a `CuArray`.
+To do so, the user needs to have [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) installed, ensure that the log density of the `Hamiltonian` can be executed on the GPU, and make sure that the initial points are a `CuArray`.
A small working example can be found at `test/cuda.jl`.
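+
+Below is a rough sketch of such a setup; it assumes a CUDA-capable device and a working CUDA.jl installation, and `test/cuda.jl` remains the maintained, tested reference:
+
+```julia
+using CUDA, AdvancedHMC
+
+T = Float32
+D, n_samples = 10, 1_000
+
+# The log density and its gradient must be GPU-friendly, e.g. pure broadcasts/reductions.
+ℓπ(θ) = -sum(abs2, θ) / 2     # standard Gaussian, up to an additive constant
+∂ℓπ∂θ(θ) = (ℓπ(θ), -θ)        # returns the (log-density, gradient) tuple
+
+initial_θ = CUDA.rand(T, D)   # the initial point lives on the GPU
+metric = UnitEuclideanMetric(T, D)
+hamiltonian = Hamiltonian(metric, ℓπ, ∂ℓπ∂θ)
+integrator = Leapfrog(T(0.1))
+# Static HMC, i.e. a fixed number of leapfrog steps per iteration
+kernel = HMCKernel(Trajectory{EndPointTS}(integrator, FixedNSteps(10)))
+
+samples, stats = sample(hamiltonian, kernel, initial_θ, n_samples)
+```
+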
## API and supported HMC algorithms
An important design goal of AdvancedHMC.jl is modularity; we would like to support algorithmic research on HMC.
-This modularity means that different HMC variants can be easily constructed by composing various components, such as preconditioning metric (i.e. mass matrix), leapfrog integrators, trajectories (static or dynamic), and adaption schemes etc.
+This modularity means that different HMC variants can be easily constructed by composing various components, such as the preconditioning metric (i.e., the mass matrix), leapfrog integrators, trajectories (static or dynamic), and adaptation schemes.
The minimal example above can be modified to suit particular inference problems by picking components from the list below.
### Hamiltonian mass matrix (`metric`)
@@ -147,14 +256,14 @@ where `dim` is the dimensionality of the sampling space.
where `ϵ` is the step size of leapfrog integration.
-### Proposal (`proposal`)
+### Kernel (`kernel`)
-- Static HMC with a fixed number of steps (`n_steps`) (Neal, R. M. (2011)): `StaticTrajectory(integrator, n_steps)`
-- HMC with a fixed total trajectory length (`trajectory_length`) (Neal, R. M. (2011)): `HMCDA(integrator, trajectory_length)`
-- Original NUTS with slice sampling (Hoffman, M. D., & Gelman, A. (2014)): `NUTS{SliceTS,ClassicNoUTurn}(integrator)`
-- Generalised NUTS with slice sampling (Betancourt, M. (2017)): `NUTS{SliceTS,GeneralisedNoUTurn}(integrator)`
-- Original NUTS with multinomial sampling (Betancourt, M. (2017)): `NUTS{MultinomialTS,ClassicNoUTurn}(integrator)`
-- Generalised NUTS with multinomial sampling (Betancourt, M. (2017)): `NUTS{MultinomialTS,GeneralisedNoUTurn}(integrator)`
+- Static HMC with a fixed number of steps (`n_steps`) (Neal, R. M. (2011)): `HMCKernel(Trajectory{EndPointTS}(integrator, FixedNSteps(n_steps)))`
+- HMC with a fixed total trajectory length (`trajectory_length`) (Neal, R. M. (2011)): `HMCKernel(Trajectory{EndPointTS}(integrator, FixedIntegrationTime(trajectory_length)))`
+- Original NUTS with slice sampling (Hoffman, M. D., & Gelman, A. (2014)): `HMCKernel(Trajectory{SliceTS}(integrator, ClassicNoUTurn()))`
+- Generalised NUTS with slice sampling (Betancourt, M. (2017)): `HMCKernel(Trajectory{SliceTS}(integrator, GeneralisedNoUTurn()))`
+- Original NUTS with multinomial sampling (Betancourt, M. (2017)): `HMCKernel(Trajectory{MultinomialTS}(integrator, ClassicNoUTurn()))`
+- Generalised NUTS with multinomial sampling (Betancourt, M. (2017)): `HMCKernel(Trajectory{MultinomialTS}(integrator, GeneralisedNoUTurn()))`
### Adaptor (`adaptor`)
@@ -166,9 +275,9 @@ where `ϵ` is the step size of leapfrog integration.
- Combine the first two using Stan's windowed adaptation: `StanHMCAdaptor(mma, ssa)`
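+
+For example, Stan's windowed adaptation composes the two adaptors above; this mirrors the minimal example at the top of this README:
+
+```julia
+mma = MassMatrixAdaptor(metric)         # adapts the preconditioning metric
+ssa = StepSizeAdaptor(0.8, integrator)  # adapts ϵ towards a 0.8 acceptance rate
+adaptor = StanHMCAdaptor(mma, ssa)      # Stan-style windowed combination
+```
+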
### Gradients
-`AdvancedHMC` supports both AD-based (`Zygote`, `Tracker` and `ForwardDiff`) and user-specified gradients. In order to use user-specified gradients, please replace `ForwardDiff` with `ℓπ_grad` in the `Hamiltonian` constructor, where the gradient function `ℓπ_grad` should return a tuple containing both the log-posterior and its gradient.
+`AdvancedHMC` supports both AD-based gradients (via [`LogDensityProblemsAD`](https://github.com/tpapp/LogDensityProblemsAD.jl)) and user-specified gradients. In order to use user-specified gradients, please replace `ForwardDiff` with `ℓπ_grad` in the `Hamiltonian` constructor, where the gradient function `ℓπ_grad` should return a tuple containing both the log-posterior and its gradient, as in the sketch below.
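+
+For instance, for the Gaussian target used throughout this README, a hand-written gradient might look like the following sketch (here both the log density and its gradient are plain functions):
+
+```julia
+ℓπ(θ) = -sum(abs2, θ) / 2    # log density, up to an additive constant
+ℓπ_grad(θ) = (ℓπ(θ), -θ)     # returns the (log-posterior, gradient) tuple
+hamiltonian = Hamiltonian(metric, ℓπ, ℓπ_grad)
+```
+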
-All the combinations are tested in [this file](https://github.com/TuringLang/AdvancedHMC.jl/blob/master/test/sampler.jl) except from using tempered leapfrog integrator together with adaptation, which we found unstable empirically.
+All the combinations are tested in [this file](https://github.com/TuringLang/AdvancedHMC.jl/blob/master/test/sampler.jl) except for the tempered leapfrog integrator combined with adaptation, which we found empirically unstable.
## The `sample` function signature in detail
@@ -187,7 +296,7 @@ function sample(
)
```
-Draw `n_samples` samples using the proposal `κ` under the Hamiltonian system `h`
+Draw `n_samples` samples using the kernel `κ` under the Hamiltonian system `h`:
- The randomness is controlled by `rng`.
- If `rng` is not provided, `GLOBAL_RNG` will be used.
diff --git a/src/abstractmcmc.jl b/src/abstractmcmc.jl
index 9af6a78c..bdb27b5b 100644
--- a/src/abstractmcmc.jl
+++ b/src/abstractmcmc.jl
@@ -28,8 +28,6 @@ end
getadaptor(state::HMCState) = state.adaptor
getmetric(state::HMCState) = state.metric
-
-getintegrator(state::HMCState) = state.κ.τ.integrator
getintegrator(state::HMCState) = state.κ.τ.integrator
"""
@@ -271,47 +269,63 @@ end
#########
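+# Step-size selection: an `HMCSampler` already carries a fully specified
+# integrator, so its step size is reused as-is.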
+function make_step_size(
+ rng::Random.AbstractRNG,
+ spl::HMCSampler,
+ hamiltonian::Hamiltonian,
+ init_params,
+)
+ return spl.κ.τ.integrator.ϵ
+end
+
function make_step_size(
rng::Random.AbstractRNG,
spl::AbstractHMCSampler,
hamiltonian::Hamiltonian,
init_params,
)
- ϵ = spl.init_ϵ
- if iszero(ϵ)
- ϵ = find_good_stepsize(rng, hamiltonian, init_params)
- T = sampler_eltype(spl)
- ϵ = T(ϵ)
- @info string("Found initial step size ", ϵ)
- end
- return ϵ
+ T = sampler_eltype(spl)
+ return make_step_size(rng, spl.integrator, T, hamiltonian, init_params)
end
function make_step_size(
rng::Random.AbstractRNG,
- spl::HMCSampler,
+ integrator::AbstractIntegrator,
+ T::Type,
hamiltonian::Hamiltonian,
init_params,
)
- return spl.κ.τ.integrator.ϵ
+ return integrator.ϵ
+end
+
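+# A `Symbol` integrator carries no step size yet, so search for a good one
+# heuristically and convert it to the sampler's element type.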
+function make_step_size(
+ rng::Random.AbstractRNG,
+ integrator::Symbol,
+ T::Type,
+ hamiltonian::Hamiltonian,
+ init_params,
+)
+ ϵ = find_good_stepsize(rng, hamiltonian, init_params)
+ @info string("Found initial step size ", ϵ)
+ return T(ϵ)
end
make_integrator(spl::HMCSampler, ϵ::Real) = spl.κ.τ.integrator
make_integrator(spl::AbstractHMCSampler, ϵ::Real) = make_integrator(spl.integrator, ϵ)
make_integrator(i::AbstractIntegrator, ϵ::Real) = i
-make_integrator(i::Type{<:AbstractIntegrator}, ϵ::Real) = i
make_integrator(i::Symbol, ϵ::Real) = make_integrator(Val(i), ϵ)
-make_integrator(i...) = error("Integrator $(typeof(i)) not supported.")
+make_integrator(@nospecialize(i), ::Real) = error("Integrator $i not supported.")
make_integrator(i::Val{:leapfrog}, ϵ::Real) = Leapfrog(ϵ)
-make_integrator(i::Val{:jitteredleapfrog}, ϵ::Real) = JitteredLeapfrog(ϵ)
-make_integrator(i::Val{:temperedleapfrog}, ϵ::Real) = TemperedLeapfrog(ϵ)
+make_integrator(i::Val{:jitteredleapfrog}, ϵ::T) where {T<:Real} =
+ JitteredLeapfrog(ϵ, T(0.1ϵ))
+make_integrator(i::Val{:temperedleapfrog}, ϵ::T) where {T<:Real} = TemperedLeapfrog(ϵ, T(1))
#########
-make_metric(i...) = error("Metric $(typeof(i)) not supported.")
+make_metric(@nospecialize(i), T::Type, d::Int) = error("Metric $(typeof(i)) not supported.")
make_metric(i::Symbol, T::Type, d::Int) = make_metric(Val(i), T, d)
make_metric(i::AbstractMetric, T::Type, d::Int) = i
-make_metric(i::Type{AbstractMetric}, T::Type, d::Int) = i
make_metric(i::Val{:diagonal}, T::Type, d::Int) = DiagEuclideanMetric(T, d)
make_metric(i::Val{:unit}, T::Type, d::Int) = UnitEuclideanMetric(T, d)
make_metric(i::Val{:dense}, T::Type, d::Int) = DenseEuclideanMetric(T, d)
diff --git a/src/constructors.jl b/src/constructors.jl
index 3c09b980..f2238224 100644
--- a/src/constructors.jl
+++ b/src/constructors.jl
@@ -28,20 +28,18 @@ struct HMCSampler{T<:Real} <: AbstractHMCSampler{T}
metric::AbstractMetric
"[`AbstractAdaptor`](@ref)."
adaptor::AbstractAdaptor
- "Adaptation steps if any"
- n_adapts::Int
end
-function HMCSampler(κ, metric, adaptor; n_adapts = 0)
+function HMCSampler(κ, metric, adaptor)
T = collect(typeof(metric).parameters)[1]
- return HMCSampler{T}(κ, metric, adaptor, n_adapts)
+ return HMCSampler{T}(κ, metric, adaptor)
end
############
### NUTS ###
############
"""
- NUTS(n_adapts::Int, δ::Real; max_depth::Int=10, Δ_max::Real=1000, init_ϵ::Real=0)
+    NUTS(δ::Real; max_depth::Int=10, Δ_max::Real=1000, integrator = :leapfrog, metric = :diagonal)
No-U-Turn Sampler (NUTS) sampler.
@@ -52,7 +50,7 @@ $(FIELDS)
# Usage:
```julia
-NUTS(n_adapts=1000, δ=0.65) # Use 1000 adaption steps, and target accept ratio 0.65.
+NUTS(0.65) # Use target accept ratio 0.65.
```
"""
struct NUTS{T<:Real} <: AbstractHMCSampler{T}
@@ -62,24 +60,15 @@ struct NUTS{T<:Real} <: AbstractHMCSampler{T}
max_depth::Int
"Maximum divergence during doubling tree."
Δ_max::T
- "Initial step size; 0 means it is automatically chosen."
- init_ϵ::T
"Choice of integrator, specified either using a `Symbol` or [`AbstractIntegrator`](@ref)"
integrator::Union{Symbol,AbstractIntegrator}
- "Choice of initial metric, specified using a `Symbol` or `AbstractMetric`. The metric type will be preserved during adaption."
+ "Choice of initial metric; `Symbol` means it is automatically initialised. The metric type will be preserved during automatic initialisation and adaption."
metric::Union{Symbol,AbstractMetric}
end
-function NUTS(
- δ;
- max_depth = 10,
- Δ_max = 1000.0,
- init_ϵ = 0.0,
- integrator = :leapfrog,
- metric = :diagonal,
-)
+function NUTS(δ; max_depth = 10, Δ_max = 1000.0, integrator = :leapfrog, metric = :diagonal)
T = typeof(δ)
- return NUTS(δ, max_depth, T(Δ_max), T(init_ϵ), integrator, metric)
+ return NUTS(δ, max_depth, T(Δ_max), integrator, metric)
end
###########
@@ -97,29 +86,32 @@ $(FIELDS)
# Usage:
```julia
-HMC(init_ϵ=0.05, n_leapfrog=10)
+HMC(10, integrator = Leapfrog(0.05), metric = :diagonal)
```
"""
struct HMC{T<:Real} <: AbstractHMCSampler{T}
- "Initial step size; 0 means automatically searching using a heuristic procedure."
- init_ϵ::T
"Number of leapfrog steps."
n_leapfrog::Int
"Choice of integrator, specified either using a `Symbol` or [`AbstractIntegrator`](@ref)"
integrator::Union{Symbol,AbstractIntegrator}
- "Choice of initial metric, specified using a `Symbol` or `AbstractMetric`. The metric type will be preserved during adaption."
+ "Choice of initial metric; `Symbol` means it is automatically initialised. The metric type will be preserved during automatic initialisation and adaption."
metric::Union{Symbol,AbstractMetric}
end
-function HMC(init_ϵ, n_leapfrog; integrator = :leapfrog, metric = :diagonal)
- return HMC(init_ϵ, n_leapfrog, integrator, metric)
+function HMC(n_leapfrog; integrator = :leapfrog, metric = :diagonal)
+ if integrator isa Symbol
+        T = Float64 # default float type when the integrator is given as a `Symbol`
+ else
+ T = integrator_eltype(integrator)
+ end
+ return HMC{T}(n_leapfrog, integrator, metric)
end
#############
### HMCDA ###
#############
"""
- HMCDA(n_adapts::Int, δ::Real, λ::Real; ϵ::Real=0)
+    HMCDA(δ::Real, λ::Real; init_ϵ::Real=0, integrator = :leapfrog, metric = :diagonal)
Hamiltonian Monte Carlo sampler with Dual Averaging algorithm.
@@ -130,7 +122,7 @@ $(FIELDS)
# Usage:
```julia
-HMCDA(n_adapts=200, δ=0.65, λ=0.3)
+HMCDA(0.65, 0.3) # Use target accept ratio 0.65 and target trajectory length 0.3.
```
For more information, please view the following paper ([arXiv link](https://arxiv.org/abs/1111.4246)):
@@ -144,16 +136,14 @@ struct HMCDA{T<:Real} <: AbstractHMCSampler{T}
δ::T
"Target leapfrog length."
λ::T
- "Initial step size; 0 means automatically searching using a heuristic procedure."
- init_ϵ::T
"Choice of integrator, specified either using a `Symbol` or [`AbstractIntegrator`](@ref)"
integrator::Union{Symbol,AbstractIntegrator}
- "Choice of initial metric, specified using a `Symbol` or `AbstractMetric`. The metric type will be preserved during adaption."
+ "Choice of initial metric; `Symbol` means it is automatically initialised. The metric type will be preserved during automatic initialisation and adaption."
metric::Union{Symbol,AbstractMetric}
end
function HMCDA(δ, λ; init_ϵ = 0, integrator = :leapfrog, metric = :diagonal)
δ, λ = promote(δ, λ)
T = typeof(δ)
- return HMCDA(δ, T(λ), T(init_ϵ), integrator, metric)
+ return HMCDA(δ, T(λ), integrator, metric)
end
diff --git a/src/integrator.jl b/src/integrator.jl
index 8391881f..5dd69ac4 100644
--- a/src/integrator.jl
+++ b/src/integrator.jl
@@ -70,6 +70,7 @@ struct Leapfrog{T<:AbstractScalarOrVec{<:AbstractFloat}} <: AbstractLeapfrog{T}
ϵ::T
end
Base.show(io::IO, l::Leapfrog) = print(io, "Leapfrog(ϵ=$(round.(l.ϵ; sigdigits=3)))")
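+# Element type of an integrator's step size; used by the convenience
+# constructors to preserve the sampler's float type.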
+integrator_eltype(i::AbstractLeapfrog{T}) where {T<:AbstractFloat} = T
### Jittering
@@ -131,7 +132,7 @@ function _jitter(
lf::JitteredLeapfrog{FT,T},
) where {FT<:AbstractFloat,T<:AbstractScalarOrVec{FT}}
ϵ = lf.ϵ0 .* (1 .+ lf.jitter .* (2 .* rand(rng) .- 1))
- return @set lf.ϵ = ϵ
+    return @set lf.ϵ = FT.(ϵ) # `rand(rng)` is `Float64`, so convert back to `FT` to keep the step-size type stable
end
jitter(rng::AbstractRNG, lf::JitteredLeapfrog) = _jitter(rng, lf)
diff --git a/test/abstractmcmc.jl b/test/abstractmcmc.jl
index 207eb21f..52d6c35c 100644
--- a/test/abstractmcmc.jl
+++ b/test/abstractmcmc.jl
@@ -9,7 +9,7 @@ include("common.jl")
θ_init = randn(rng, 2)
nuts = NUTS(0.8)
- hmc = HMC(0.05, 100)
+ hmc = HMC(100; integrator = Leapfrog(0.05))
hmcda = HMCDA(0.8, 0.1)
integrator = Leapfrog(1e-3)
diff --git a/test/constructors.jl b/test/constructors.jl
index f3c7dd37..5deb2df3 100644
--- a/test/constructors.jl
+++ b/test/constructors.jl
@@ -8,16 +8,16 @@ include("common.jl")
@testset "$T" for T in [Float32, Float64]
@testset "$(nameof(typeof(sampler)))" for (sampler, expected) in [
(
- HMC(T(0.1), 25),
+ HMC(25, integrator = Leapfrog(T(0.1))),
(
adaptor_type = NoAdaptation,
metric_type = DiagEuclideanMetric{T},
integrator_type = Leapfrog{T},
),
),
- # This should peform the correct promotion for the 2nd argument.
+ # This should perform the correct promotion for the 2nd argument.
(
- HMCDA(T(0.1), 1),
+ HMCDA(T(0.8), 1, integrator = Leapfrog(T(0.1))),
(
adaptor_type = StanHMCAdaptor,
metric_type = DiagEuclideanMetric{T},
@@ -48,6 +48,22 @@ include("common.jl")
integrator_type = Leapfrog{T},
),
),
+ (
+ NUTS(T(0.8); integrator = :jitteredleapfrog),
+ (
+ adaptor_type = StanHMCAdaptor,
+ metric_type = DiagEuclideanMetric{T},
+ integrator_type = JitteredLeapfrog{T,T},
+ ),
+ ),
+ (
+ NUTS(T(0.8); integrator = :temperedleapfrog),
+ (
+ adaptor_type = StanHMCAdaptor,
+ metric_type = DiagEuclideanMetric{T},
+ integrator_type = TemperedLeapfrog{T,T},
+ ),
+ ),
]
# Make sure the sampler element type is preserved.
@test AdvancedHMC.sampler_eltype(sampler) == T