Exam

On April 18 we’ll have our in-class exam. It will be closed notes and based on the chapter lectures, slides, and materials. It will be a blend of multiple-choice/true-false questions (very similar to the lecture quizzes) and short open-ended questions (e.g., reading comprehension or explaining code snippets, similar to the problem sets). The exam is worth 200 points.

The exam is expected to take about 1 hour, but students may use the entire class period (12pm - 2:45pm) to complete it.

The exam will cover material from Chapters 1-13 in Statistical Rethinking, which is consistent with the material from Lectures 1-13.

Part 1: Multiple-choice/true-false questions (100 points)

Half of the exam will be multiple-choice and true-false questions drawn from Lesson quizzes 1-8. Some questions will be repeated and some will be new.

To help you study, I have prepared a PDF file with all of the questions and answers from Lesson quizzes 1-8. You can download the file here.

Similar to the lesson quizzes, multiple-choice questions will be worth 4 points and true-false questions will be worth 2 points.

Part 2: Short-answer questions (100 points)

There will also be 8-10 short-answer questions. These will be motivated by Lectures 1-13 and Chapters 1-13. Lecture materials are the most important; the chapter readings are helpful if you need a more in-depth discussion of lecture topics. The solutions to problem sets 1-7 may also be helpful to review.

Here is a list of the relevant topics that will be covered from each of the 13 lectures.

Lecture 1: Models, hypotheses, “drawing the Bayesian owl,” Science before Statistics, DAGs, causal inference vs. prediction
Lecture 2: Bayesian data analysis, Garden of Forking Data / Bayesian marbles example, globe-tossing example, posterior to prediction / sampling from the posterior
Lecture 3: Linear regression, why the Normal distribution?, a language for modeling / specifying Bayesian models, height-weight example, basics of generative modeling, prior/posterior sampling, simulation-based validation
Lecture 4: Causes aren’t in the data, categorical/index variables in regression, contrasts, polynomials, splines
Lecture 5: Interpretation and creation of DAGs, the four elemental confounds, marriage/divorce example, plant height-fungus example, post-treatment bias, collider bias, descendants
Lecture 6: Parent-grandparent education example, DAG thinking / marginal effects, do-calculus, the backdoor criterion, examples of finding adjustment sets, good vs. bad controls, the Table 2 fallacy
Lecture 7: Leave-one-out cross-validation, regularization, importance sampling / PSIS, WAIC, model mis-selection, outliers and robust regression
Lecture 8: The four methods for estimating the posterior considered in class, intuition for Markov chain Monte Carlo and Hamiltonian Monte Carlo, convergence diagnostics (trace/trank plots, Rhat, effective sample size), determining good vs. bad convergence
Lecture 9: Berkeley admissions example and related examples, GLM modeling, logistic vs. binomial regression, post-stratification
Lecture 10: Continued discussion of GLMs, hidden collider bias in the admissions example, sensitivity analysis, how/why more parameters than observations, oceanic tools example, Poisson regression
Lecture 11: Trolley problem example, ordered categories, endogenous selection, monotonic predictors, complex causal effects
Lecture 12: Models with memory / intro to multi-level models, coffee shop example, tadpole/tank example, varying effects superstitions
Lecture 13: Clusters and features, chimpanzee example, multi-level predictions, handling divergent transitions