currently, basis functions for GLMM mode are selected via the following algorithm:

1. identify a maximum of $M = 5$ basis functions per individual subject
2. refit all subject-level basis functions on the entire dataset
3. use a negative-binomial LASSO model to remove collinear / uninformative basis functions, fitting $n_\lambda = 50$ candidate models across values of the penalty parameter $\lambda$ and choosing the "best" model as the one that minimizes AIC
4. using the set of basis functions retained by the "best" model, fit a negative-binomial GLMM with random intercepts and slopes for each basis function
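The AIC-minimizing selection along the penalty path could be sketched roughly like this (a minimal sketch only — the log-likelihoods and retained-coefficient counts are randomly generated placeholders, not the actual fitted negative-binomial LASSO output):

```python
import numpy as np

# Placeholder per-fit summaries for the n_lambda = 50 models along the
# LASSO path; in practice these come from the fitted models themselves.
rng = np.random.default_rng(1)
n_lambda = 50
loglik = -250.0 + rng.uniform(0.0, 30.0, n_lambda)  # log-likelihood of each fit
n_retained = rng.integers(1, 20, n_lambda)          # nonzero basis functions per fit

# AIC = -2 * log-likelihood + 2 * k, with k = number of retained basis functions
aic = -2.0 * loglik + 2.0 * n_retained
best = int(np.argmin(aic))
print(f"best model index: {best}, retains {n_retained[best]} basis functions")
```

Note that because AIC only charges 2 units per retained coefficient, the minimizer can still keep many basis functions when the likelihood gains are modest, which is consistent with the over-retention problem described below.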
this might not be ideal: I've seen cases where the number of retained basis functions is "too high", leading to estimation issues and weird, jagged-looking fitted values when visualized
consider instead choosing the "best" model as the most sparse one? or perhaps rank the candidate models jointly on sparsity and AIC and choose the model with the best mean rank?
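The combined-ranking idea could look something like this (a hedged sketch with made-up AIC values and retained-basis counts; `rankdata` assigns rank 1 to the smallest value, so lower is better for both criteria):

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical per-model summaries from the LASSO path (values are assumptions):
aic = np.array([310.2, 305.8, 304.1, 306.0, 312.5])
n_retained = np.array([12, 9, 7, 4, 2])

# Rank each criterion separately (lower AIC and fewer retained basis
# functions both get lower ranks), then pick the best mean rank.
mean_rank = (rankdata(aic) + rankdata(n_retained)) / 2
best = int(np.argmin(mean_rank))
print(best)  # → 2
```

In this toy example the pure AIC minimizer is also index 2, but when the AIC-best model is a dense one, the sparsity ranks pull the combined choice toward a smaller model, which is the intended effect.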