Improving Gradient-guided Nested Sampling for Posterior Inference #373

@josephdviviano

Description

Paper to implement: https://arxiv.org/abs/2312.03911

A Markov-Chain Monte Carlo (MCMC) algorithm is used to sample terminal states from the reward distribution; backward sampling from these terminal states is then used to generate off-policy trajectories.
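The two-stage idea can be sketched on a toy grid environment. This is a minimal illustration, not the paper's algorithm or any existing library API: the reward function, grid size, and the uniform backward policy are all assumptions made for the example. Metropolis-Hastings samples terminal grid states with probability proportional to the reward, and a backward policy then walks each terminal state back to the initial state, yielding a complete off-policy trajectory.

```python
import numpy as np

def reward(s):
    # Toy unnormalized reward peaked at grid coordinate (7, ..., 7) -- an assumption.
    return np.exp(-0.5 * np.sum((np.asarray(s) - 7) ** 2))

def mh_terminal_states(n_samples, dim=2, side=8, burn=200, rng=None):
    """Metropolis-Hastings over terminal grid states, targeting p(s) ∝ R(s)."""
    rng = np.random.default_rng(0) if rng is None else rng
    s = rng.integers(0, side, size=dim)
    samples = []
    for t in range(burn + n_samples):
        # Propose a +/-1 move along one random coordinate (symmetric proposal).
        prop = s.copy()
        i = rng.integers(dim)
        prop[i] = np.clip(prop[i] + rng.choice([-1, 1]), 0, side - 1)
        # Accept with the standard Metropolis ratio R(s') / R(s).
        if rng.random() < reward(prop) / reward(s):
            s = prop
        if t >= burn:
            samples.append(s.copy())
    return samples

def backward_trajectory(terminal, rng=None):
    """Uniform backward policy: decrement one nonzero coordinate per step
    until the origin (the initial state s0) is reached."""
    rng = np.random.default_rng(0) if rng is None else rng
    s = np.asarray(terminal).copy()
    traj = [s.copy()]
    while s.sum() > 0:
        nz = np.flatnonzero(s)
        s[rng.choice(nz)] -= 1
        traj.append(s.copy())
    return traj[::-1]  # forward order: s0, ..., sT
```

Each trajectory produced this way is off-policy with respect to the forward sampler, since its terminal state came from the MCMC chain rather than from rolling the forward policy out; in training, such trajectories would be scored under the forward policy's log-probabilities.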

Metadata

Labels: enhancement (New feature or request)
