Problem Set 9

This problem set is due on May 2, 2022 at 11:59am.

Step 1: Download this file locally.

Step 2: Complete the assignment

Step 3: Knit the assignment as either an html or pdf file.

Step 4: Submit your file through this Canvas link.


  • Name:
  • UNCC ID:
  • Other student(s) you worked with (optional):

For this problem set, you're going to run the measurement-error examples from section 15.1 of the book and Lecture 17. However, you're going to run them in the format of the project, namely as a Bayesian workflow.

You may find this code from the lecture helpful. Instead of rethinking, you may alternatively use brms, following Solomon Kurz's brms translation of Chapter 15.

Either use this R Markdown file or create a new one from scratch for this project.

1. Initial Model

  • Part 1: Provide, in DAG form, the initial model shown on page 492. Be sure to differentiate the unobserved nodes. What is a way that we can rewrite the regression into multiple parts (see Lecture 17, slides 37-38)? Explain the DAG in 2-3 sentences. (See the DAG sketch after this list.)

  • Part 2: Write out your statistical model in ulam form (slides 41-42 in Lecture 17; see the model sketch after the next paragraph). When is the distinction between data and parameters important in this model, and when is it not (see around 30:00 in the Lecture 17 video)?
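
If you'd like to draw the DAG in R rather than by hand, below is a minimal sketch using dagitty and rethinking's drawdag(). The node names (D for the true divorce rate, D_obs for its measured value, e_D for the measurement error) follow the book's notation, and the coordinates are only for layout. Depending on your package versions, the latent nodes may or may not be drawn circled; if not, just circle them by hand in your write-up.

```r
library(rethinking)
library(dagitty)

# DAG for the initial model: A and M influence the true divorce rate D;
# we only observe D_obs, which is generated by D plus measurement error e_D
dag1 <- dagitty( "dag{
    A -> M
    A -> D
    M -> D
    D -> D_obs
    e_D -> D_obs
}" )
coordinates(dag1) <- list(
    x = c( A=0 , M=1 , D=1 , D_obs=2 , e_D=3 ) ,
    y = c( A=0 , M=-1 , D=0 , D_obs=0 , e_D=0 ) )
latents(dag1) <- c( "D" , "e_D" )  # declare the unobserved nodes
drawdag( dag1 )
```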

For this part, it's okay if you go ahead and run the model, even though fitting it is technically part 3. We'll need the model specification for the prior predictive check next.
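
As a reference point, here is a sketch of the model in ulam form, following the book's code for m15.1 (all variables standardized):

```r
library(rethinking)
data(WaffleDivorce)
d <- WaffleDivorce

dlist <- list(
    D_obs = standardize( d$Divorce ) ,
    D_sd  = d$Divorce.SE / sd( d$Divorce ) ,  # known measurement error, rescaled
    M = standardize( d$Marriage ) ,
    A = standardize( d$MedianAgeMarriage ) ,
    N = nrow(d)
)

m15.1 <- ulam(
    alist(
        # measurement model: observed D is the true D plus known error
        D_obs ~ dnorm( D_true , D_sd ) ,
        # regression model for the (unobserved) true divorce rate
        vector[N]:D_true ~ dnorm( mu , sigma ) ,
        mu <- a + bA*A + bM*M ,
        a ~ dnorm( 0 , 0.2 ) ,
        bA ~ dnorm( 0 , 0.5 ) ,
        bM ~ dnorm( 0 , 0.5 ) ,
        sigma ~ dexp( 1 )
    ) , data=dlist , chains=4 , cores=4 )
```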

2. Prior Predictive Check

Conduct a prior predictive simulation for your model. You may find R code 5.4 from Chapter 5 to be a helpful earlier example. You will need the extract.prior() function. Set your seed with set.seed(10). To do the prior predictive check, you will need to pass c(-2,2) as the A and M inputs. Plot your simulated prior lines; a sketch follows below.
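
Here is one possible sketch, adapting the pattern of R code 5.4 to this two-predictor model. The plotting limits and the number of lines drawn are my choices, not requirements:

```r
set.seed(10)
prior <- extract.prior( m15.1 )

# evaluate the prior regression lines at the extremes of A and M
mu <- link( m15.1 , post=prior , data=list( A=c(-2,2) , M=c(-2,2) ) )

plot( NULL , xlim=c(-2,2) , ylim=c(-2,2) ,
      xlab="A, M (standardized)" , ylab="D (standardized)" )
for ( i in 1:50 )
    lines( c(-2,2) , mu[i,] , col=col.alpha("black",0.4) )
```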

3. Fit the Model

Run the m15.1 model if you haven't already in part 1. Use precis(depth=2) to display the summarized model results. Compare your results to those in Lecture 17 / on page 494.
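
A minimal sketch; depth=2 is what makes precis() display the vector of D_true parameters in addition to the top-level coefficients:

```r
precis( m15.1 , depth=2 )
```

The top-level parameters (a, bA, bM, sigma) are probably the ones to focus on for the comparison to page 494.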

4. Validate Computation

Run convergence diagnostics: trace plots, trank plots, and an examination of Rhat and n_eff. Are you satisfied with the convergence of your model? Make an argument for why or why not.
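
A sketch of the standard rethinking diagnostics (Rhat and n_eff already appear in the precis() output from the previous step):

```r
traceplot( m15.1 )   # chains should mix well and look stationary
trankplot( m15.1 )   # rank histograms should be roughly uniform across chains
```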

5. Posterior Predictive Check

Run a posterior predictive check as a scatterplot. First display the original observed points (which ignore measurement error), then overlay the posterior means. You can find the code here. To run this, you'll also need to fit a similar model without measurement error; a sketch of both pieces follows.
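
Below is a sketch of both pieces: a no-measurement-error version of the regression (essentially m5.3 from Chapter 5, fit to the same standardized data) and a scatterplot overlaying observed values and posterior means, in the style of the book's Figure 15.2. The object names (m15.1_noerr, D_est) are mine:

```r
# same regression, but treating observed D as if measured without error
m15.1_noerr <- ulam(
    alist(
        D ~ dnorm( mu , sigma ) ,
        mu <- a + bA*A + bM*M ,
        a ~ dnorm( 0 , 0.2 ) ,
        bA ~ dnorm( 0 , 0.5 ) ,
        bM ~ dnorm( 0 , 0.5 ) ,
        sigma ~ dexp( 1 )
    ) , data=list( D=dlist$D_obs , A=dlist$A , M=dlist$M ) ,
    chains=4 , cores=4 )

# posterior means of the true divorce rates from the error model
post <- extract.samples( m15.1 )
D_est <- apply( post$D_true , 2 , mean )

# observed values (blue) vs posterior means (open), joined by segments
plot( dlist$A , dlist$D_obs , pch=16 , col=rangi2 ,
      xlab="median age at marriage (std)" , ylab="divorce rate (std)" )
points( dlist$A , D_est )
for ( i in 1:dlist$N )
    lines( rep( dlist$A[i] , 2 ) , c( dlist$D_obs[i] , D_est[i] ) )
```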

Explain what is going on in this plot.

You are welcome to run cross-validation (e.g., WAIC/PSIS) on the two models and compare their fit.
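
If you do, rethinking's compare() will produce the table. One caveat (my note, worth discussing): the two models place their likelihoods on slightly different definitions of the outcome, so interpret the comparison cautiously.

```r
compare( m15.1 , m15.1_noerr , func=WAIC )
compare( m15.1 , m15.1_noerr , func=PSIS )
```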

Revise the Model: Repeat with error in both outcome and predictor

Follow the model code in R code 15.5 to extend your model to include measurement error for the marriage rate (M); a sketch follows below. You will need to repeat the same five steps above for this new version of the model: specify the model/DAG, run a prior predictive check, fit the model, validate computation, and run a posterior predictive check (see R code 15.6).
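
For reference, here is a sketch of the extended model following R code 15.5. Since M is standardized, the unobserved true marriage rates get a standard-normal prior:

```r
dlist2 <- list(
    D_obs = standardize( d$Divorce ) ,
    D_sd  = d$Divorce.SE / sd( d$Divorce ) ,
    M_obs = standardize( d$Marriage ) ,
    M_sd  = d$Marriage.SE / sd( d$Marriage ) ,  # known error on M, rescaled
    A = standardize( d$MedianAgeMarriage ) ,
    N = nrow(d)
)

m15.2 <- ulam(
    alist(
        D_obs ~ dnorm( D_true , D_sd ) ,
        vector[N]:D_true ~ dnorm( mu , sigma ) ,
        mu <- a + bA*A + bM*M_true[i] ,
        # measurement model for the marriage rate
        M_obs ~ dnorm( M_true , M_sd ) ,
        vector[N]:M_true ~ dnorm( 0 , 1 ) ,
        a ~ dnorm( 0 , 0.2 ) ,
        bA ~ dnorm( 0 , 0.5 ) ,
        bM ~ dnorm( 0 , 0.5 ) ,
        sigma ~ dexp( 1 )
    ) , data=dlist2 , chains=4 , cores=4 )
```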

In your posterior predictive check, rerun the code to produce Figure 15.3 in the book (slides 58-59 in the lecture); a sketch follows.
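
A sketch of that plotting code, following R code 15.6 but adapted to the dlist2 names used above (observed values in blue, posterior means as open points, connected by shrinkage segments):

```r
post2 <- extract.samples( m15.2 )
D_est <- apply( post2$D_true , 2 , mean )
M_est <- apply( post2$M_true , 2 , mean )

plot( dlist2$M_obs , dlist2$D_obs , pch=16 , col=rangi2 ,
      xlab="marriage rate (std)" , ylab="divorce rate (std)" )
points( M_est , D_est )
for ( i in 1:dlist2$N )
    lines( c( dlist2$M_obs[i] , M_est[i] ) ,
           c( dlist2$D_obs[i] , D_est[i] ) )
```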

Compare your results to the model fit without measurement error. Did including error on M increase or decrease the estimated effect of M on D?
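
One quick way to line the estimates up side by side (assuming the m15.1_noerr fit from step 5); compare the bM rows in particular:

```r
precis( m15.1_noerr )   # no measurement error
precis( m15.1 )         # error on D only
precis( m15.2 )         # error on both D and M
```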
