| Date | Topic | Subject | Book | Code |
|------|-------|---------|------|------|
| 1-17 | Classical problems | Introduction; problems with classical methods | 1.1-1.4 | R |
| 1-19 | Large scale testing | Family-wise error rates | | R |
| 1-24 | Large scale testing | False discovery rates | | R |
| 1-26 | Large scale testing | Local false discovery rates | | R |
| 1-31 | Large scale testing | Hierarchical models and shrinkage | | R |
| 2-02 | Ridge regression | Ridge regression | 1.5.1-1.5.3 | R |
| 2-07 | Ridge regression | Selection of λ; case studies | 1.5.4-1.5.6 | R |
| 2-09 | Lasso | KKT conditions, soft thresholding | 2.1-2.2 | R |
| 2-14 | Lasso | (cont’d) | | |
| 2-16 | Lasso | Algorithms | 2.3-2.4 | R |
| 2-21 | Lasso | Cross-validation | 2.5-2.6 | R |
| 2-23 | Lasso | Case studies, Bayesian interpretation | 2.7-2.9 | R |
| 2-28 | Bias reduction | Adaptive lasso, MCP, and SCAD | 3.1-3.2 | R |
| 3-02 | Bias reduction | Algorithms; convexity | 3.5-3.7 | R |
| 3-07 | Bias reduction | Case studies | 3.8 | R |
| 3-09 | Stability | Elastic Net | 4.1-4.2 | R |
| 3-14 | No class; spring break | | | |
| 3-16 | No class; spring break | | | |
| 3-21 | Stability | Algorithms; case studies | 4.3-4.5 | R |
| 3-23 | Theory | Theoretical properties | 5 | |
| 3-28 | Theory | Theoretical properties: Non-asymptotic | 5 | |
| 3-30 | Inference | Marginal false discovery rates | 6 | R |
| 4-04 | Inference | Debiasing and subsampling/resampling | 7, 9 | R |
| 4-06 | Inference | Selective inference | | R |
| 4-11 | Inference | (cont’d) | | |
| 4-13 | Inference | Knockoff filter | | R |
| 4-18 | Other likelihoods | Logistic regression | 10 | R |
| 4-20 | Other likelihoods | Other likelihoods | 11-12 | R |
| 4-25 | Structured sparsity | Group lasso | 13 | R |
| 4-27 | Structured sparsity | Bi-level selection | 14 | R |
| 5-02 | Structured sparsity | Fusion penalties | 15 | R |
| 5-04 | Structured sparsity | Further applications of penalization and sparsity | | R |
| 5-08 | Final projects | Penalized GEEs (paper) | | |
| | | Spatiotemporal exposure prediction (paper) | | |
| | | Spectral deconfounding (paper) | | |
| | | Preconditioning for sign consistency (paper) | | |
| | | ADMM for quantile regression (paper) | | |
| | | Best subset selection (paper 1) (paper 2) | | |