Hi Merlise,

Do you think it's sensible to have flat priors on the always-included variables, as we discussed in Edinburgh? I think this makes sense, and I believe it boils down to a simple numerical integral with a change of degrees of freedom. I'll look into the formulas a bit more once I'm done with some other work.
Cheers,
Alexander
It is an extension of the idea of the flat prior on the intercept, which is always included.
In theory, the covariance of the g-prior would be defined as $X_M^T(I - P_{X_{inc}})X_M$, where $P_{X_{inc}}$ is the orthogonal projection onto the column space of $X_{inc}$ (the always-included variables) and $X_M$ contains the variables under consideration for model $M$.
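A minimal numerical sketch of that covariance kernel (plain numpy, with illustrative names; not the package's actual implementation):

```python
import numpy as np

def gprior_covariance(X_M, X_inc):
    """Return X_M^T (I - P_{X_inc}) X_M, the g-prior covariance kernel when
    the always-included columns X_inc get a flat prior.

    P_{X_inc} is the orthogonal projection onto the column space of X_inc,
    built here from a thin QR decomposition for numerical stability.
    """
    Q, _ = np.linalg.qr(X_inc)            # orthonormal basis for col(X_inc)
    X_M_resid = X_M - Q @ (Q.T @ X_M)     # (I - P_{X_inc}) X_M
    return X_M_resid.T @ X_M_resid        # equals X_M^T (I - P_{X_inc}) X_M

# Toy usage: intercept plus one forced-in covariate, three candidate columns.
rng = np.random.default_rng(0)
X_inc = np.column_stack([np.ones(50), rng.normal(size=50)])
X_M = rng.normal(size=(50, 3))
print(gprior_covariance(X_M, X_inc).shape)   # (3, 3)
```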
All the formulas for the log marginal likelihood would go through after changing the degrees of freedom from $n - 1$ to $n - p_{inc}$ and adjusting the definition of $R^2$ so that the denominator uses the sum of squares from the model with $X_{inc}$ (rather than the intercept alone), which would be easy to add.
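For concreteness, a sketch of that change, assuming the fixed-g marginal likelihood takes the usual Zellner g-prior Bayes-factor form (as in Liang et al., 2008); the function name and arguments are illustrative, not the package's interface:

```python
import numpy as np

def log_marglik_gprior(R2_M, p_M, n, p_inc, g):
    """Log marginal likelihood for model M, up to a constant common to all
    models, under a fixed-g Zellner g-prior on the p_M model-specific
    coefficients and a flat prior on the p_inc always-included columns.

    R2_M is the 'partial' R^2 whose denominator sum of squares comes from
    the model containing only X_inc, and the residual degrees of freedom
    become n - p_inc instead of n - 1.
    """
    df = n - p_inc
    return 0.5 * (df - p_M) * np.log1p(g) - 0.5 * df * np.log1p(g * (1.0 - R2_M))

# With p_inc = 1 (intercept only) this reduces to the standard g-prior
# Bayes factor against the null model.
print(log_marglik_gprior(R2_M=0.4, p_M=2, n=100, p_inc=3, g=100.0))
```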
The trickier part would be the bookkeeping in the post-processing functions that compute predictions and posterior distributions for the coefficients, since now only some of them will be shrunk.