Introduction to JuliaGPs + Turing #423
LGTM!
I added two minor comments, but dope stuff :)
Unfortunately, there is not a simple way to enforce monotonicity in the samples from a GP, and we can see this in some of the plots above, so we must hope that we have enough data to ensure that this relationship approximately holds under the posterior. In any case, you can judge for yourself whether you think this is the most useful visualisation that we can perform -- if you think there is something better to look at, please let us know!
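To make the non-monotonicity point concrete, here is a minimal sketch (assuming AbstractGPs.jl and Plots.jl, with an illustrative SEKernel; this is not code from the tutorial itself) that draws a few samples from a GP prior -- individual draws wander up and down freely:

```julia
using AbstractGPs, Plots

# A GP prior with a squared-exponential kernel (zero mean by default).
f = GP(SEKernel())

# Evaluate the GP at a grid of inputs; the small jitter keeps the
# covariance matrix numerically positive definite.
x = range(0.0, 10.0; length=100)
fx = f(x, 1e-6)

# Draw five prior samples. Nothing constrains any single draw to be
# monotone, which is the point made in the paragraph above.
samples = rand(fx, 5)
plot(x, samples; legend=false, xlabel="x", ylabel="f(x)")
```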
Maybe this is crazy (and I'm def not suggesting we put this in the tutorial), but could we perform a quick linear regression and use this as a prior for the GP?
Hmmmm, I'm not entirely sure that I understand. If I've understood correctly, the suggestion is to look at the data in order to inform our prior, which doesn't quite make sense to me.
So this would indeed make it, I dunno, "semi-Bayesian" or something, but it could be a useful trick to get monotonicity (though it wouldn't guarantee it). But I guess the proper way would be to treat it as a multivariate normal and model the mean directly?
Another thing: is it sensible to put a prior on the GP prior?
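For concreteness, the "fit a line first" idea might look something like the rough sketch below (an untested illustration assuming AbstractGPs.jl; the least-squares fit and the CustomMean usage are my own framing, not anything from the tutorial):

```julia
using AbstractGPs

# Hypothetical data with an increasing trend.
x = collect(1.0:20.0)
y = 0.5 .* x .+ randn(length(x))

# Quick least-squares fit of y ≈ a + b * x.
X = hcat(ones(length(x)), x)
a, b = X \ y

# Use the fitted line as the GP's prior mean, so samples fluctuate
# around the trend rather than around zero. This nudges draws towards
# increasing functions but, as noted above, guarantees nothing.
f = GP(AbstractGPs.CustomMean(t -> a + b * t), SEKernel())
fx = f(x, 0.1)  # finite projection with observation noise variance 0.1
```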
Well, in some ways we are here -- we're placing a prior over the kernel parameters, which is implicitly placing a prior over the GP prior. Are you imagining something non-parametric, though?
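For anyone following along, "a prior over the GP prior" in this parametric sense looks roughly like the following Turing model (a sketch in the spirit of the tutorial; the LogNormal hyperpriors and the jitter value are illustrative choices, not the tutorial's exact settings):

```julia
using Turing, AbstractGPs

# Hyperpriors on the kernel variance and lengthscale induce a
# distribution over GP priors.
@model function gp_hyperprior(x, y)
    σ² ~ LogNormal(0.0, 1.0)  # kernel variance
    ℓ ~ LogNormal(0.0, 1.0)   # kernel lengthscale
    f = GP(σ² * with_lengthscale(SEKernel(), ℓ))
    y ~ f(x, 1e-3)            # observe y under the induced GP
end

# chain = sample(gp_hyperprior(x, y), NUTS(), 500)
```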
Awesome stuff @willtebbutt :)
I've written up the tutorial I made for the BSU workshop as a TuringTutorial. I think it's okay to go as-is, but I suspect that people will have ideas on how it could be improved, so I'd like to iterate a bit before merging.
I would appreciate the following two kinds of feedback: