HW 07 - Predicting attitudes towards marijuana

```r
library(tidyverse)
library(tidymodels)
```
This homework is due May 1 at 11:59pm ET.
Getting started
Go to the info2950-sp24 organization on GitHub. Click on the repo with the prefix hw-07. It contains the starter documents you need to complete the homework.
Clone the repo and start a new project in RStudio. See the Lab 0 instructions for details on cloning a repo and starting a new R project.
Workflow + formatting
Make sure to
- Update author name on your document.
- Label all code chunks informatively and concisely.
- Follow the Tidyverse code style guidelines.
- Make at least 3 commits.
- Resize figures where needed; avoid tiny or huge plots.
- Use informative labels for plot axes, titles, etc.
- Consider aesthetic choices such as color, legend position, etc.
- Turn in an organized, well formatted document.
You will estimate a series of machine learning models for this homework assignment. I strongly encourage you to make use of code caching in the Quarto document to decrease the rendering time for the document.
Data and packages
We’ll use the tidyverse and tidymodels packages for this assignment.
The General Social Survey (GSS) is a biennial survey of the American public.
Over the past twenty years, American attitudes towards marijuana have softened extensively. In the early 2010s, the number of Americans who believed marijuana should be legal began to outnumber those who thought it should not be legal.
data/gss.rds contains a selection of variables from the 2021 GSS. The outcome of interest, grassv, is a factor variable coded as either "should be legal" (respondent believes marijuana should be legal) or "should not be legal" (respondent believes marijuana should not be legal).

```r
gss <- read_rds(file = "data/gss.rds")
skimr::skim(gss)
```
Name | gss |
Number of rows | 3225 |
Number of columns | 25 |
_______________________ | |
Column type frequency: | |
factor | 22 |
numeric | 3 |
________________________ | |
Group variables | None |
Variable type: factor
skim_variable | n_missing | complete_rate | ordered | n_unique | top_counts |
---|---|---|---|---|---|
colath | 1107 | 0.66 | FALSE | 2 | yes: 1471, not: 647 |
colmslm | 1110 | 0.66 | FALSE | 2 | not: 1422, yes: 693 |
degree | 20 | 0.99 | TRUE | 5 | hig: 1275, bac: 823, gra: 623, ass: 293 |
fear | 7 | 1.00 | FALSE | 2 | no: 2094, yes: 1124 |
grassv | 2345 | 0.27 | FALSE | 2 | sho: 663, sho: 217 |
gunlaw | 34 | 0.99 | FALSE | 2 | fav: 2157, opp: 1034 |
happy | 14 | 1.00 | TRUE | 3 | pre: 1851, not: 736, ver: 624 |
health | 8 | 1.00 | FALSE | 4 | goo: 1828, exc: 662, fai: 606, poo: 121 |
hispanic | 27 | 0.99 | FALSE | 22 | not: 2822, mex: 205, pue: 44, spa: 38 |
income16 | 409 | 0.87 | TRUE | 26 | $17: 306, $60: 299, $90: 265, $75: 247 |
letdie1 | 2193 | 0.32 | FALSE | 2 | yes: 737, no: 295 |
owngun | 89 | 0.97 | FALSE | 3 | no: 2035, yes: 1093, ref: 8 |
partyid | 29 | 0.99 | TRUE | 8 | ind: 667, str: 651, not: 427, str: 414 |
polviews | 54 | 0.98 | TRUE | 7 | mod: 1103, lib: 495, con: 493, sli: 389 |
pray | 65 | 0.98 | FALSE | 6 | sev: 911, nev: 701, onc: 536, les: 443 |
pres16 | 1030 | 0.68 | FALSE | 4 | cli: 1204, tru: 817, oth: 135, did: 39 |
race | 43 | 0.99 | FALSE | 3 | whi: 2473, bla: 373, oth: 336 |
region | 0 | 1.00 | FALSE | 9 | sou: 645, eas: 530, pac: 465, wes: 340 |
sex | 70 | 0.98 | FALSE | 2 | fem: 1773, mal: 1382 |
sexfreq | 1479 | 0.54 | TRUE | 7 | not: 516, 2 o: 278, onc: 236, abo: 226 |
wrkstat | 5 | 1.00 | FALSE | 8 | wor: 1435, ret: 792, wor: 288, kee: 258 |
zodiac | 285 | 0.91 | FALSE | 12 | cap: 297, sco: 275, sag: 271, aqu: 265 |
Variable type: numeric
skim_variable | n_missing | complete_rate | mean | sd | p0 | p25 | p50 | p75 | p100 | hist |
---|---|---|---|---|---|---|---|---|---|---|
id | 0 | 1.00 | 2225.25 | 1292.22 | 1 | 1096 | 2224 | 3335 | 4471 | ▇▇▇▇▇ |
age | 262 | 0.92 | 52.24 | 17.30 | 18 | 37 | 53 | 66 | 89 | ▅▇▇▇▃ |
hrs1 | 1523 | 0.53 | 40.17 | 13.16 | 0 | 40 | 40 | 45 | 89 | ▁▂▇▁▁ |
You can find the documentation for each of the available variables using the GSS Data Explorer. Just search by the column name to find the associated description.
Exercises
Exercise 1
Selecting potential features. For each of the variables below, explain whether or not you think they would be useful predictors for grassv, and why.

- degree
- happy
- zodiac
- id
Now is a good time to render, commit (with a descriptive and concise commit message), and push again. Make sure that you commit and push all changed documents and your Git pane is completely empty before proceeding.
Exercise 2
Partitioning your data. Reproducibly split your data into training and test sets. Allocate 75% of observations to training, and 25% to testing. Partition the training set into 10 distinct folds for model fitting. Unless otherwise stated, you will use these sets for all the remaining exercises.
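One way to implement this split is with rsample (loaded as part of tidymodels). This is a sketch, not the required solution: the seed value and the object names gss_split, gss_train, gss_test, and gss_folds are placeholders you can change.

```r
library(tidymodels)

# set a seed so the split and the folds are reproducible
set.seed(123)

# allocate 75% of observations to training, 25% to testing
gss_split <- initial_split(gss, prop = 0.75)
gss_train <- training(gss_split)
gss_test <- testing(gss_split)

# partition the training set into 10 distinct folds
gss_folds <- vfold_cv(gss_train, v = 10)
```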
Now is a good time to render, commit (with a descriptive and concise commit message), and push again. Make sure that you commit and push all changed documents and your Git pane is completely empty before proceeding.
Exercise 3
Fit a null model. To establish a baseline for evaluating model performance, we want to estimate a null model. This is a model with zero predictors. In the absence of predictors, our best guess for a classification model is to predict the modal outcome for all observations (e.g. if a majority of respondents in the training set believe marijuana should be legal, then we would predict that outcome for every respondent).
The parsnip package includes a model specification for the null model. Fit the null model using the cross-validated folds. Report the accuracy, ROC AUC values, and confusion matrix for this model. How does the null model perform?
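A minimal sketch of the null model using parsnip's null_model() specification, assuming the cross-validated folds are stored in an object named gss_folds (a placeholder name from the previous exercise, not something the assignment requires):

```r
library(tidymodels)

# null model: always predicts the modal outcome class
null_spec <- null_model() |>
  set_engine("parsnip") |>
  set_mode("classification")

null_wf <- workflow() |>
  add_model(null_spec) |>
  add_formula(grassv ~ .)

# fit on the 10 folds, saving predictions so we can build a confusion matrix
null_fit <- fit_resamples(
  null_wf,
  resamples = gss_folds,
  metrics = metric_set(accuracy, roc_auc),
  control = control_resamples(save_pred = TRUE)
)

collect_metrics(null_fit)    # accuracy and ROC AUC
conf_mat_resampled(null_fit) # confusion matrix averaged across folds
```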
Now is a good time to render, commit (with a descriptive and concise commit message), and push again. Make sure that you commit and push all changed documents and your Git pane is completely empty before proceeding.
Exercise 4
Fit a basic logistic regression model. Estimate a simple logistic regression model to predict grassv as a function of age, degree, happy, partyid, and sex. Fit the model using the cross-validated folds without any explicit feature engineering.
Report the accuracy, ROC AUC values, and confusion matrix for this model. How does this model perform?
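A sketch of one possible implementation, again assuming the folds object is named gss_folds; the glm engine shown here is parsnip's default for logistic regression:

```r
library(tidymodels)

# basic logistic regression specification
logit_spec <- logistic_reg() |>
  set_engine("glm") |>
  set_mode("classification")

# model formula with the five requested predictors; no feature engineering
logit_wf <- workflow() |>
  add_model(logit_spec) |>
  add_formula(grassv ~ age + degree + happy + partyid + sex)

logit_fit <- fit_resamples(
  logit_wf,
  resamples = gss_folds,
  metrics = metric_set(accuracy, roc_auc),
  control = control_resamples(save_pred = TRUE)
)

collect_metrics(logit_fit)
conf_mat_resampled(logit_fit)
```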
Now is a good time to render, commit (with a descriptive and concise commit message), and push again. Make sure that you commit and push all changed documents and your Git pane is completely empty before proceeding.
Exercise 5
Fit a basic random forest model. Estimate a random forest model to predict grassv as a function of all the other variables in the dataset (except id). In order to do this, you need to impute missing values for all the predictor columns. This means replacing missing values (NA) with plausible values given what we know about the other observations.
To do this you should build a feature engineering recipe that does the following:
- Omits the id column as a predictor
- Removes rows with an NA for grassv (we want to omit observations with missing values for the outcome, not impute them)
- Uses median imputation for numeric predictors
- Uses modal imputation for nominal predictors
- Downsamples the outcome of interest to have an equal number of observations for each level
Fit the model using the cross-validated folds and the ranger engine, and report the accuracy, ROC AUC values, and confusion matrix for this model. How does this model perform?
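The recipe steps above could be sketched as follows. This assumes the training data and folds are named gss_train and gss_folds, and that the themis package (which provides step_downsample()) is installed; all object names are placeholders.

```r
library(tidymodels)
library(themis) # provides step_downsample()

rf_rec <- recipe(grassv ~ ., data = gss_train) |>
  # keep id in the data for bookkeeping, but exclude it as a predictor
  update_role(id, new_role = "id") |>
  # drop rows with a missing outcome rather than imputing them
  step_naomit(grassv, skip = FALSE) |>
  # impute missing predictor values
  step_impute_median(all_numeric_predictors()) |>
  step_impute_mode(all_nominal_predictors()) |>
  # equalize the number of observations in each outcome level
  step_downsample(grassv)

rf_spec <- rand_forest() |>
  set_engine("ranger") |>
  set_mode("classification")

rf_wf <- workflow() |>
  add_recipe(rf_rec) |>
  add_model(rf_spec)

rf_fit <- fit_resamples(
  rf_wf,
  resamples = gss_folds,
  metrics = metric_set(accuracy, roc_auc),
  control = control_resamples(save_pred = TRUE)
)

collect_metrics(rf_fit)
conf_mat_resampled(rf_fit)
```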
Now is a good time to render, commit (with a descriptive and concise commit message), and push again. Make sure that you commit and push all changed documents and your Git pane is completely empty before proceeding.
Exercise 6
Fit a nearest neighbors model. Estimate a nearest neighbors model to predict grassv as a function of all the other variables in the dataset (except id). Use recipes to pre-process the data as necessary to train a nearest neighbors model. Be sure to also perform the same pre-processing as for the random forest model (e.g. omitting NA outcomes, imputation). Make sure your step order is correct for the recipe.
To determine the optimal number of neighbors, tune over at least 10 possible values.
Tune the model using the cross-validated folds and report the ROC AUC values for the five best models. How do these models perform?
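A possible sketch, reusing the placeholder names gss_train and gss_folds. Unlike the random forest, a distance-based model needs dummy-encoded and normalized predictors; note the step order (impute before creating dummies, normalize after). Passing grid = 10 to tune_grid() asks for at least 10 candidate values of neighbors.

```r
library(tidymodels)
library(themis)

knn_rec <- recipe(grassv ~ ., data = gss_train) |>
  update_role(id, new_role = "id") |>
  step_naomit(grassv, skip = FALSE) |>
  step_impute_median(all_numeric_predictors()) |>
  step_impute_mode(all_nominal_predictors()) |>
  # distance-based models need numeric predictors on a common scale
  step_dummy(all_nominal_predictors()) |>
  step_normalize(all_numeric_predictors()) |>
  step_downsample(grassv)

# mark the number of neighbors for tuning
knn_spec <- nearest_neighbor(neighbors = tune()) |>
  set_engine("kknn") |>
  set_mode("classification")

knn_wf <- workflow() |>
  add_recipe(knn_rec) |>
  add_model(knn_spec)

knn_tune <- tune_grid(
  knn_wf,
  resamples = gss_folds,
  grid = 10,
  metrics = metric_set(roc_auc)
)

show_best(knn_tune, metric = "roc_auc", n = 5)
```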
Now is a good time to render, commit (with a descriptive and concise commit message), and push again. Make sure that you commit and push all changed documents and your Git pane is completely empty before proceeding.
Exercise 7
Fit a penalized logistic regression model. Estimate a penalized logistic regression model to predict grassv. Use the same feature engineering recipe as for the nearest neighbors model.

Tune the model over its two hyperparameters: penalty and mixture. Create a data frame containing combinations of values for each of these parameters. penalty should be tested at the values 10^seq(-6, -1, length.out = 20), while mixture should be tested at the values c(0, 0.2, 0.4, 0.6, 0.8, 1).

Tune the model using the cross-validated folds and the glmnet engine, and report the ROC AUC values for the five best models. Use autoplot() to inspect the performance of the models. How do these models perform?
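One way to build the tuning grid and run the search, assuming knn_rec and gss_folds are the placeholder objects from the earlier sketches; expand_grid() (from tidyr, attached by tidymodels) crosses the 20 penalty values with the 6 mixture values for 120 combinations.

```r
library(tidymodels)

# penalized logistic regression with both hyperparameters marked for tuning
glmnet_spec <- logistic_reg(penalty = tune(), mixture = tune()) |>
  set_engine("glmnet")

# reuse the nearest neighbors recipe, which already produces
# dummy-encoded, normalized predictors as glmnet requires
glmnet_wf <- workflow() |>
  add_recipe(knn_rec) |>
  add_model(glmnet_spec)

# 20 penalty values x 6 mixture values = 120 combinations
glmnet_grid <- expand_grid(
  penalty = 10^seq(-6, -1, length.out = 20),
  mixture = c(0, 0.2, 0.4, 0.6, 0.8, 1)
)

glmnet_tune <- tune_grid(
  glmnet_wf,
  resamples = gss_folds,
  grid = glmnet_grid,
  metrics = metric_set(roc_auc)
)

show_best(glmnet_tune, metric = "roc_auc", n = 5)
autoplot(glmnet_tune)
```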
Now is a good time to render, commit (with a descriptive and concise commit message), and push again. Make sure that you commit and push all changed documents and your Git pane is completely empty before proceeding.
Exercise 8
Tune the random forest model. Revisit the random forest model used previously. This time, implement hyperparameter tuning over mtry and min_n to find the optimal settings. Use at least ten combinations of hyperparameter values. Report the best five combinations of values and their ROC AUC values. How do these models perform?
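A sketch under the same assumptions (placeholder objects rf_rec and gss_folds from earlier); with grid = 10, tune_grid() generates at least ten mtry/min_n combinations and finalizes the range of mtry from the number of predictors.

```r
library(tidymodels)

rf_tune_spec <- rand_forest(mtry = tune(), min_n = tune()) |>
  set_engine("ranger") |>
  set_mode("classification")

rf_tune_wf <- workflow() |>
  add_recipe(rf_rec) |>
  add_model(rf_tune_spec)

rf_tune_res <- tune_grid(
  rf_tune_wf,
  resamples = gss_folds,
  grid = 10,
  metrics = metric_set(roc_auc)
)

show_best(rf_tune_res, metric = "roc_auc", n = 5)
```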
Now is a good time to render, commit (with a descriptive and concise commit message), and push again. Make sure that you commit and push all changed documents and your Git pane is completely empty before proceeding.
Exercise 9
Pick the best performing model. Select the best performing model. Train that recipe + model using the full training set and report the accuracy, ROC AUC, and confusion matrix using the held-out test set of data. Visualize the ROC curve. How would you describe this model’s performance at predicting attitudes towards the legalization of marijuana?
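For illustration only, suppose the tuned random forest performed best; the finalization and test-set evaluation could then look like this (rf_tune_res, rf_tune_wf, and gss_split are the placeholder objects from the earlier sketches, and the probability column name assumes "should be legal" is the first factor level):

```r
library(tidymodels)

# finalize the workflow with the winning hyperparameters
best_params <- select_best(rf_tune_res, metric = "roc_auc")
final_wf <- finalize_workflow(rf_tune_wf, best_params)

# fit on the full training set and evaluate once on the held-out test set
final_fit <- last_fit(
  final_wf,
  split = gss_split,
  metrics = metric_set(accuracy, roc_auc)
)

collect_metrics(final_fit)

# confusion matrix from the test-set predictions
collect_predictions(final_fit) |>
  conf_mat(truth = grassv, estimate = .pred_class)

# ROC curve for the "should be legal" class
collect_predictions(final_fit) |>
  roc_curve(truth = grassv, `.pred_should be legal`) |>
  autoplot()
```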
Now is a good time to render, commit (with a descriptive and concise commit message), and push again. Make sure that you commit and push all changed documents and your Git pane is completely empty before proceeding.
Bonus (optional) - Battle Royale
For those looking for a challenge (and a slight amount of extra credit for this assignment), train a high-performing model to predict grassv. You must use tidymodels to train this model.

To evaluate your model's effectiveness, you will generate predictions for a held-back secret test set of respondents from the survey. These can be found in data/gss-test.rds. The data frame has an identical structure to gss.rds; however, I have not included the grassv column. You will have no way of judging the effectiveness of your model on the test set itself.

To evaluate your model's performance, you must create a CSV file that contains your predicted probabilities for grassv. This CSV should have three columns: id (the id value for the respondent), .pred_should be legal, and .pred_should not be legal. You can generate this CSV file using the code below:
```r
bind_cols(
  gss_secret_test,
  predict(best_fit, new_data = gss_secret_test, type = "prob")
) |>
  select(id, starts_with(".pred")) |>
  write_csv(file = "data/gss-preds.csv")
```
where gss_secret_test is a data frame imported from data/gss-test.rds and best_fit is the final model fitted using the entire training set.

Your CSV file must:

- Be structured exactly as I specified above.
- Be stored in the data folder and named "gss-preds.csv".
If it does not meet these requirements, then you are not eligible to win this challenge.
The three students with the highest ROC AUC as calculated using their secret test set predictions will earn an extra (uncapped) 10 points on this homework assignment. For instance, if a student earned 45/50 points on the other components and was in the top-three, they would earn a 55/50 for this homework assignment.
Render, commit, and push one last time. Make sure that you commit and push all changed documents and your Git pane is completely empty before proceeding.
Wrap up
Submission
- Go to http://www.gradescope.com and click Log in in the top right corner.
- Click School Credentials → Cornell University NetID and log in using your NetID credentials.
- Click on your INFO 2950 course.
- Click on the assignment, and you’ll be prompted to submit it.
- Mark all the pages associated with each exercise. All the pages of your homework should be associated with at least one question (i.e., should be "checked").
- Select all pages of your .pdf submission to be associated with the “Workflow & formatting” question.
Grading
- Exercise 1: 2 points
- Exercise 2: 2 points
- Exercise 3: 4 points
- Exercise 4: 4 points
- Exercise 5: 8 points
- Exercise 6: 8 points
- Exercise 7: 8 points
- Exercise 8: 6 points
- Exercise 9: 4 points
- Bonus: 0 points (extra credit)
- Workflow + formatting: 4 points
- Total: 50 points
The “Workflow & formatting” component assesses the reproducible workflow. This includes:
- Following tidyverse code style
- All code being visible in rendered PDF (no more than 80 characters)
- Appropriate figure sizing, and figures with informative labels and legends
- Ensuring reproducibility by setting a random seed value