If you include many predictors, it is often more likely that you include predictors that are highly intercorrelated. In such cases, interpretation of both the bivariate and model-based importance indices can be useful, as a variable that is important in a bivariate sense might be hidden in a model by other correlated predictors (I elaborate more on this here with links).

I wrote this little simulation to highlight the relationship between sample size and parameter estimation in multiple regression.

set.seed(1)

> summary(fitmodel(n=9, k=10))
Call:
ALL 9 residuals are 0: no residual degrees of freedom!
Coefficients: (2 not defined because of singularities)
Residual standard error: NaN on 0 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: NaN
F-statistic: NaN on 8 and 0 DF, p-value: NA

Sample size is one less than the number of predictors. It is only possible to estimate 9 parameters, one of which is the constant.

> summary(fitmodel(n=10, k=10))
Call:
ALL 10 residuals are 0: no residual degrees of freedom!
Coefficients: (1 not defined because of singularities)
F-statistic: NaN on 9 and 0 DF, p-value: NA

Sample size is the same as the number of predictors. It is only possible to estimate 10 parameters, one of which is the constant.

> summary(fitmodel(n=11, k=10))
Call:
ALL 11 residuals are 0: no residual degrees of freedom!
F-statistic: NaN on 10 and 0 DF, p-value: NA

Sample size is one more than the number of predictors. All parameters are estimated, including the constant.

Residual standard error: 0.2375 on 1 degrees of freedom

Sample size is two more than the number of predictors, and it is finally possible to estimate the fit of the overall model.
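The definition of fitmodel() is not shown above, so here is a hypothetical Python analogue of what the simulation is doing: regress n observations of random noise on k random predictors plus an intercept, then count how many parameters are actually identifiable and how many residual degrees of freedom remain. The function name fit_model and its internals are my assumptions, not the author's code.

```python
import numpy as np

def fit_model(n, k, seed=1):
    """Hypothetical analogue of the post's fitmodel(): fit an OLS model
    with an intercept and k random predictors to n random observations."""
    rng = np.random.default_rng(seed)
    # Design matrix: a column of ones (the constant) plus k noise predictors.
    X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])
    y = rng.standard_normal(n)
    # lstsq reports the rank of X, i.e. how many parameters are identifiable.
    _, _, rank, _ = np.linalg.lstsq(X, y, rcond=None)
    estimable = rank        # identifiable parameters, including the constant
    resid_df = n - rank     # residual degrees of freedom
    return estimable, resid_df

for n in (9, 10, 11, 12):
    est, df = fit_model(n, k=10)
    print(f"n={n}: {est} estimable parameters, {df} residual df")
```

With random (hence linearly independent) data, the rank of the n-by-(k+1) design matrix is min(n, k+1), which reproduces the pattern in the R output: until n exceeds k+1 there are zero residual degrees of freedom, so overall model fit cannot be estimated.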