Extending chain representations #57
Comments
Great, thanks @cpfiffer. The `resume` will change the state of an existing object, so I think it should have a `!` in its name.

I think I'm still confused about what you mean by "state". There are a few things other than samples that can be important. Some are fixed size, like
- the sampler state needed to continue sampling, and
- the transformation to unconstrained space.

And some that scale linearly with the number of samples:
- the log-density of each sample, and
- per-sample diagnostics from the sampler.

I think only the first group should be considered "state", and per-sample diagnostics should be separate from samples and state (I'm currently calling them "info", which IIRC I got from Turing somewhere). I was thinking of having separate accessors for each of these.
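A minimal sketch of that separation, with made-up names just for illustration:

```julia
# "state" is fixed-size; "samples" and "info" grow with the number of draws.
struct Chain{T,I,S}
    samples::Vector{T}  # the draws themselves (grows linearly)
    info::Vector{I}     # per-sample diagnostics (grows linearly)
    state::S            # fixed-size sampler state, enough to resume
end
```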
Hmm, and further complicating this is that `save!!` is called as

```julia
samples = save!!(samples, sample, i, model, sampler, N; kwargs...)
```

so the sampler state never reaches it. That could make this very tricky.
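For context, the body of AbstractMCMC's sampling loop looks roughly like this (a simplified sketch; note that the state is threaded through `step`, not through `save!!`):

```julia
# Simplified sketch of the body of mcmcsample; rng, model, sampler, N,
# chain_type, and kwargs are the arguments of the enclosing function.
sample, state = step(rng, model, sampler; kwargs...)
samples = AbstractMCMC.samples(sample, model, sampler, N; kwargs...)
samples = save!!(samples, sample, 1, model, sampler, N; kwargs...)
for i in 2:N
    sample, state = step(rng, model, sampler, state; kwargs...)
    samples = save!!(samples, sample, i, model, sampler, N; kwargs...)
end
bundle_samples(samples, model, sampler, state, chain_type; kwargs...)  # final conversion
```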
I guess I could overload `sample` itself?
Sure, one can always roll a custom `sample`. Regarding the points in the OP, I agree with what @cpfiffer said. For resuming, something like this should work:

```julia
function resume(rng::Random.AbstractRNG, chain, args...; kwargs...)
    return sample(
        rng, getmodel(chain), getsampler(chain), args...;
        state=getstate(chain), kwargs...,
    )
end
```
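Hypothetical usage, assuming the chain type stores its model, sampler, and final state behind `getmodel`/`getsampler`/`getstate` accessors, and that `sample` accepts a `state` keyword:

```julia
chain = sample(rng, model, sampler, 1_000)  # initial run, including warmup
more  = resume(rng, chain, 1_000)           # continues from getstate(chain), no re-warmup
```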
Thanks @devmotion, I was hoping to allow a convergence criterion as a stopping condition, so this is great. There does seem to be an assumption that everything the user could need is required to be part of the sample. For DynamicHMC, my setup looks like this (AdvancedHMC will be very similar):

```julia
@concrete struct DynamicHMCChain{T} <: AbstractChain{T}
    samples    # :: AbstractVector{T}
    logp       # log-density of the distribution each sample was drawn from
    info       # per-sample metadata, type depends on the sampler used
    meta       # metadata associated with the chain as a whole
    state
    transform
end
```

Here's what that looks like in use:
```julia
julia> samples(chain)
100-element TupleVector with schema (x = Float64, σ = Float64)
(x = -0.1±0.34, σ = 0.576±0.37)

julia> logp(chain)[1:5]
5-element ElasticArrays.ElasticVector{Float64, 0, Vector{Float64}}:
 -1.136224195720376
 -0.42132266397402207
 -0.9789248604768969
 -1.136224195720376
 -0.8517859618293282

julia> info(chain)[1:5]
5-element ElasticArrays.ElasticVector{DynamicHMC.TreeStatisticsNUTS, 0, Vector{DynamicHMC.TreeStatisticsNUTS}}:
 DynamicHMC.TreeStatisticsNUTS(-1.283461597663962, 3, turning at positions 6:9, 0.9703961901160859, 11, DynamicHMC.Directions(0xdfea943d))
 DynamicHMC.TreeStatisticsNUTS(-1.150959614787742, 1, turning at positions 2:3, 0.9646928495527286, 3, DynamicHMC.Directions(0x74715257))
 DynamicHMC.TreeStatisticsNUTS(-1.1699430991621091, 3, turning at positions 3:6, 1.0, 11, DynamicHMC.Directions(0x8472ffea))
 DynamicHMC.TreeStatisticsNUTS(-1.941965877904205, 1, turning at positions 2:3, 0.7405784149911505, 3, DynamicHMC.Directions(0x8c1d2457))
 DynamicHMC.TreeStatisticsNUTS(-1.3103584844087501, 2, turning at positions -2:1, 0.9999999999999999, 3, DynamicHMC.Directions(0x9483096d))

julia> meta(chain).H
Hamiltonian with Gaussian kinetic energy (Diagonal), √diag(M⁻¹): [1.1613920024118645, 0.7589536122573856]

julia> meta(chain).algorithm
DynamicHMC.NUTS{Val{:generalized}}(10, -1000.0, Val{:generalized}())

julia> meta(chain).ϵ
0.2634132789343616

julia> meta(chain).rng
Random._GLOBAL_RNG()
```
I guess I could cram my `meta` into the `state` that gets passed around?
Yeah, that should work. I think you could just dump them into a tuple or small wrapper struct when you return them as the `state`.
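A minimal sketch of that idea (the struct name and fields are hypothetical):

```julia
# Wrap the raw DynamicHMC state together with the chain-level metadata, so the
# pair can be returned as the `state` from `step` and unpacked when resuming.
struct DynamicHMCState{S,M}
    state::S  # whatever DynamicHMC needs to take the next step
    meta::M   # Hamiltonian, algorithm, step size ϵ, rng, ...
end
```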
I can't return them as …
Hi,

I had misunderstood some of the goals of this package; thankfully @cpfiffer got me straightened out. I'm looking into using this as an interface for populating SampleChains.

The docs seem mostly oriented toward people building new samplers, and not so much toward people building new ways of representing chains. So I have lots of questions:

1. New chain representations currently seem to require overloading `bundle_samples`. It seems like I should be able to instead overload `save!!` and then have `bundle_samples` be a no-op (see the sketch below). Is that right?
2. I've seen mention of a `resume` function that can pick up sampling where it left off, without needing to go back through the warmup phase. But I don't see it in this repo. Where can I find it?

I'm sure I'll have more to come, but this will get me started. Thanks :)
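A minimal sketch of point 1, assuming a hypothetical `MyChain` container with a `push!`-able `samples` field and the `save!!` signature quoted later in the thread (the initial container would still come from `AbstractMCMC.samples`, which dispatches on the model and sampler):

```julia
import AbstractMCMC

# Hypothetical chain container (stand-in for something like DynamicHMCChain).
struct MyChain{T}
    samples::Vector{T}
end

# Grow the chain incrementally as each new sample arrives ...
function AbstractMCMC.save!!(chain::MyChain, sample, i, model, sampler, N; kwargs...)
    push!(chain.samples, sample)
    return chain
end

# ... so that the final bundling step has nothing left to do.
function AbstractMCMC.bundle_samples(
    chain::MyChain, model, sampler, state, ::Type{MyChain}; kwargs...
)
    return chain
end
```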