Neural Kernel Network implementation #92
Conversation
* initial pass
* Enable main tests
* Make documenter a test dep
* Fix travis
* Some work
* Some work
* Tweak compat
* Tweak compat again
* Fix Diagonal perf
* Bump version
* Update news
* Make compat less restrictive
* Bump patch
* Make forwards-pass type stable for Float32
* Remove new space
* Basic GP examples
* Relax version requirement
* Complete plotting basics
* Document examples
* Demonstrate approximate inference with Titsias
* Documentation
* Further docs improvements
* More docs and the process decomposition example
* More docs, more examples
* Sensor fusion
* Tweak docs
* More docs and more examples
* More examples, more docs
* WIP on GPPP + Pseudo-Points
adds examples
merge nkn kernel
Thanks for this PR. I'm really busy this week, so I'll do a proper review early next week.
Codecov Report
@@                Coverage Diff                 @@
##    wct/flux-nkn-integration     #92     +/- ##
==================================================
-  Coverage         88.17%       75%    -13.18%
==================================================
   Files                24        27         +3
   Lines               685       844       +159
==================================================
+  Hits                604       633        +29
-  Misses               81       211       +130
Continue to review full report at Codecov.
fix indentation
fix indentation
fix indentation
Would you mind rebasing this on top of master so that it's easier to inspect the diff?
PR #94 includes all the changes contained in PR #92; do I still need to rebase?
It would be really helpful. It's not really possible to review it as it currently is.
This PR is an attempt to implement the Neural Kernel Network (NKN) for Stheno; a working example can be found here.

Interface
Newly implemented types & functions:

* `Primitive`: NKN can be viewed as a composite kernel, and `Primitive` serves as a container for all the basic kernels. It has `ew` & `pw` methods implemented, but it isn't a subtype of Stheno's `Kernel`. Calls like `ew(<:Primitive, x)` & `pw(<:Primitive, x)` will compute `ew` and `pw` for each kernel inside `Primitive`, and then prepare them as inputs to the following neural network.
* `LinearLayer`: This is just a linear transformation z = W*x. The reason I created this type instead of using Flux's `Dense` is that we don't need a bias or activation function here.
* `Product`: A product function that performs element-wise multiplication of kernel matrices.
* `NeuralKernelNetwork`: A subtype of Stheno's `Kernel` type with `ew` and `pw` methods implemented; it can be used like any ordinary Stheno kernel.

Supports:

* Flux's `params` method
* `Zygote` to compute the gradient of the `logpdf` w.r.t. all the parameters in the NKN

To be discussed
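To illustrate how these pieces compose, here is a minimal NumPy sketch of the NKN idea from the paper (not the PR's Julia code; the class names mirror the types above, but the implementations are simplified assumptions): `Primitive` stacks the base kernels' Gram matrices, a `LinearLayer` with non-negative weights mixes them, and `Product` multiplies pairs element-wise, so every intermediate result is again a valid kernel matrix.

```python
import numpy as np

def rbf(X, Y, lengthscale=1.0):
    # Squared-exponential base kernel on 1-D inputs.
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

class Primitive:
    """Container for base kernels; stacks their Gram matrices."""
    def __init__(self, *kernels):
        self.kernels = kernels
    def pw(self, X, Y):
        # Shape (num_kernels, len(X), len(Y)): the input to the network.
        return np.stack([k(X, Y) for k in self.kernels])

class LinearLayer:
    """z = W*x with non-negative weights, no bias, no activation, so that
    each output is a conic combination of kernels (still a kernel)."""
    def __init__(self, W):
        self.W = W
    def __call__(self, Ks):
        return np.einsum('ij,jmn->imn', self.W, Ks)

class Product:
    """Element-wise product of adjacent kernel matrices (Schur products
    of kernels are kernels); halves the number of matrices."""
    def __call__(self, Ks):
        return Ks[0::2] * Ks[1::2]

# A tiny NKN: 2 base kernels -> linear mix to 2 -> product -> 1 kernel.
X = np.linspace(0.0, 1.0, 5)
prim = Primitive(lambda a, b: rbf(a, b, 0.5), lambda a, b: rbf(a, b, 2.0))
net = [LinearLayer(np.array([[0.6, 0.4], [0.3, 0.7]])), Product()]

Ks = prim.pw(X, X)
for layer in net:
    Ks = layer(Ks)
K = Ks[0]  # final Gram matrix, shape (5, 5)
```

Because every layer preserves positive semi-definiteness, the final `K` is symmetric and PSD, which is the property that lets `NeuralKernelNetwork` behave as an ordinary kernel.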
In order to allow using Flux's `params` to extract all the parameters inside the NKN, I slightly modified the definitions and field types of `Scaled`, `Stretched` and `RQ` in `kernels.jl`:

* `Scaled`: the original `σ²` is replaced by `logσ²`, and its type is restricted to `AbstractVector`. The reason for doing so is that `σ²` should remain positive during optimization, and Flux's `params` method requires the fields to be of type `AbstractArray`.
* `Stretched`: `a` is replaced by `loga`, and its type is restricted to `AbstractVecOrMat` (same reason as above).
* `RQ`: `α` is replaced by `logα`, and its type is restricted to `AbstractVector` (same reason as above).
* `PerEQ`: I noticed that this kernel hasn't been exported by `Stheno` yet, so I reimplemented and exported it.

NOTE: I have only done some basic testing of these modifications; they are not guaranteed to be type stable and may have bugs in other situations.
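The log-reparameterisation above can be shown in isolation (a NumPy sketch with a hypothetical `LogScaled` class mirroring the PR's `Scaled`, not the actual Stheno code): the kernel stores `logσ²` as an unconstrained 1-element array, so a parameter-collection mechanism like Flux's `params` sees a plain array and gradient steps can never push `σ²` out of the positive domain.

```python
import numpy as np

class LogScaled:
    """Hypothetical variance-scaled kernel storing logσ² as an
    unconstrained 1-element array; σ² = exp(logσ²) is recovered on use,
    so it stays positive for any unconstrained update."""
    def __init__(self, sigma_sq):
        self.log_sigma_sq = np.log(np.asarray([sigma_sq], dtype=float))

    @property
    def sigma_sq(self):
        return np.exp(self.log_sigma_sq)

    def pw(self, K_base):
        # Scale a base kernel matrix by the (always positive) variance.
        return self.sigma_sq[0] * K_base

k = LogScaled(2.0)
# Even a large "gradient step" in the wrong direction on logσ²
# cannot make σ² non-positive:
k.log_sigma_sq -= 10.0
```

This is the standard trick for keeping positivity constraints out of the optimizer: the optimizer works on an unconstrained array, and the transform `exp` enforces the constraint.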
Reference
[1] Shengyang Sun, Guodong Zhang, Chaoqi Wang, Wenyuan Zeng, Jiaman Li, Roger Grosse. Differentiable Compositional Kernel Learning for Gaussian Processes (2018).