Exercise 4.9

Answers

When K increases, fewer points (N - K) are available to train the models, so the learned hypotheses become worse and Eval(gm) tends to grow for every model m. Since gm* is the model with the lowest validation error among all the models, Eval(gm*) also increases with K. The same reasoning applies to Eout(gm*), since Eval(gm*) is an estimate of Eout(gm*).
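This effect can be illustrated with a small simulation (an illustrative sketch, not part of the exercise itself): fit polynomial models of degrees 1-3 to a noisy sin target, train each on N - K points, validate on K points, and average the selected model's validation error Eval(gm*) over many data sets. The target function, candidate degrees, and noise level are all assumptions chosen for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def selected_eval(N, K, degrees=(1, 2, 3), noise=0.3):
    """One data set: train each candidate model on N - K points and
    return the lowest validation error Eval(gm*) over the models."""
    x = rng.uniform(-1, 1, N)
    y = np.sin(np.pi * x) + noise * rng.normal(size=N)
    x_train, y_train = x[:N - K], y[:N - K]
    x_val, y_val = x[N - K:], y[N - K:]
    e_vals = []
    for d in degrees:
        coef = np.polyfit(x_train, y_train, d)  # learn gm on N - K points
        e_vals.append(np.mean((np.polyval(coef, x_val) - y_val) ** 2))
    return min(e_vals)  # Eval(gm*), the optimistic selected error

def avg_selected_eval(K, N=30, trials=500):
    return np.mean([selected_eval(N, K) for _ in range(trials)])

small_K = avg_selected_eval(K=5)   # 25 training points per model
large_K = avg_selected_eval(K=25)  # only 5 training points per model
# Expectation: large_K > small_K, since worse-trained models push
# every Eval(gm), and hence the minimum Eval(gm*), upward.
```

Here the selected error grows with K because the shrinking training set hurts every candidate model at once.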

When K increases to a large value, very few points (N - K) remain for training, so even complex models can only learn crude hypotheses and all the learned models behave similarly. The optimistic validation error Eval(gm*) is then close to the validation errors Eval(gm) of all the other models. Since each individual Eval(gm) is an unbiased estimate of the corresponding Eout(gm), the optimistic Eval(gm*) approaches an unbiased estimate of Eout(gm*) as well.
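The unbiasedness claim used above can also be checked numerically (again a sketch with an assumed target and noise level): for one fixed model, with no selection involved, average Eval over many independent data sets and compare it with Eout estimated on a large held-out test set.

```python
import numpy as np

rng = np.random.default_rng(1)

def eval_and_eout(N=30, K=10, degree=2, noise=0.3):
    """Train one fixed model on N - K points; return its validation
    error on the K held-out points and an estimate of Eout computed
    on a large independent test set."""
    x = rng.uniform(-1, 1, N)
    y = np.sin(np.pi * x) + noise * rng.normal(size=N)
    coef = np.polyfit(x[:N - K], y[:N - K], degree)
    e_val = np.mean((np.polyval(coef, x[N - K:]) - y[N - K:]) ** 2)
    x_test = rng.uniform(-1, 1, 10_000)
    y_test = np.sin(np.pi * x_test) + noise * rng.normal(size=10_000)
    e_out = np.mean((np.polyval(coef, x_test) - y_test) ** 2)
    return e_val, e_out

vals, outs = zip(*(eval_and_eout() for _ in range(2000)))
mean_val, mean_out = np.mean(vals), np.mean(outs)
# For a fixed model (no selection), the two averages should agree
# closely: Eval(gm) is an unbiased estimate of Eout(gm).
```

The optimistic bias of Eval(gm*) comes only from the min over models; each Eval(gm) on its own is unbiased, which is what the averages confirm.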

2021-12-08 09:31