I am setting up regression models for prediction in psychometrics and have run into challenges with cross-validation. Essentially, I would like cross-validated linear regression models with as few terms as possible. So far I have implemented model selection based on best-subset regression (leaps), and the selected models are then cross-validated via repeated k-fold CV (self-implemented). Unfortunately this requires a lot of computational power, and I am not sure it is the best way to go.
I would like to switch to glmnet with ridge regression and cross-validation, and afterwards use the selected model for further computation with a linear model. The idea is:
- Determine the model with cv.glmnet (ridge regression).
- Take the coefficients from the glmnet model at lambda.min.
- Feed the coefficients into lm and continue working with the lm model.
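The first two steps above can be sketched as follows (a minimal example with simulated data standing in for your predictor matrix `x` and response `y`):

```r
library(glmnet)

# Simulated stand-in data (replace with your own predictor matrix and response).
set.seed(1)
x <- matrix(rnorm(100 * 5), nrow = 100, ncol = 5)
y <- rnorm(100)

# Step 1: cross-validated ridge regression (alpha = 0 selects ridge).
cv_fit <- cv.glmnet(x, y, alpha = 0)

# Step 2: coefficients at the lambda with minimal cross-validated error.
ridge_coef <- coef(cv_fit, s = "lambda.min")
ridge_coef
```

One caveat: ridge (alpha = 0) shrinks coefficients but does not set any of them to zero, so it will not reduce the number of terms by itself; if dropping terms is the goal, the lasso (alpha = 1) performs variable selection.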
So my question: is there a way to fix terms and their weights in a linear model (the lm function in R)?
1 Answer
One way to fix a weight is to subtract the fixed term from the dependent variable and perform the regression on that new dependent variable. You would essentially be doing regression on the residuals.
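A minimal sketch of that idea, with simulated data and a hypothetical fixed coefficient of 0.5 for x1:

```r
set.seed(1)
x1 <- rnorm(100)
x2 <- rnorm(100)
y  <- 0.5 * x1 + 2 * x2 + rnorm(100)

fixed_b1 <- 0.5             # the coefficient we want to hold fixed
y_adj <- y - fixed_b1 * x1  # subtract the fixed term from the dependent variable

fit <- lm(y_adj ~ x2)       # regress the adjusted response on the remaining terms
coef(fit)

# Equivalently, lm's offset() keeps the fixed term on the model side
# and yields the same estimates for the free coefficients:
fit2 <- lm(y ~ x2 + offset(fixed_b1 * x1))
all.equal(coef(fit), coef(fit2))
```

The `offset()` form is handy if you want `predict()` on the fitted `lm` object to include the fixed term automatically.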