## Import your data here
## Example:
## data_raw <- palmerpenguins::penguins
data_raw <-

Mini Project 3
Supervised Learning and Model Evaluation

Overview
In Mini Project 1, your team explored data and told a short story with visuals. In Mini Project 2, your team built a cleaning pipeline and created a trustworthy, analysis-ready table.
In Mini Project 3, your team will take one more step: use data to make a prediction and evaluate how well it works on new data.
Complete the Quarto file by
- Defining a clear prediction task
- Choosing a target variable and a small set of predictors
- Splitting the data into a training set and a test set
- Fitting 2 simple supervised learning models
- Comparing model performance on the test set
- Explaining results and limitations in plain language
Keep the project simple and honest. Do NOT do advanced tuning, deep learning, or a large number of models.
What you will submit
Show the following in your Posit Cloud project:
- The rendered HTML report
- The complete version of this source qmd file
- Any data file you used, only if it is not a built-in package data set
- Any cleaned data file you created for this project, if applicable
What you will present
A 10-minute team presentation that explains
- What your prediction task was
- Which variables you used
- How you split the data
- Which 2 models you compared
- Which model performed better on the test set
- What your team learned and what the model cannot tell us
Team Info
Team Name: Your Team Name
Team members and roles for this project:
- Project lead (keeps time, coordinates tasks): Your Member Name(s)
- Data and feature lead (prepares target and predictors): Your Member Name(s)
- Modeling lead (fits models and organizes output): Your Member Name(s)
- Evaluation and presentation lead (compares models and prepares presentation): Your Member Name(s)
Project rules
Keep the project simple.
- Choose one prediction question
- Use one data set
- Use no more than 6 predictors
- If you do a classification project, choose a binary target variable
- Fit exactly 2 simple models
- Use one training and test split
- Use 1 or 2 evaluation metrics
- Explain results in plain language
- Do NOT claim causation
- Do NOT overstate what your model can do
Step 1: Choose a data set
Choose one option below. Two teams may use the same data set.
Data sets
- penguins data set from R package palmerpenguins
- mpg data set from R package ggplot2
- loans_full_schema data set from R package openintro
- Your cleaned data from Mini Project 2, with instructor approval
- Your own data set, with instructor approval
Choose a data set and target that are manageable for a short team project. Your goal is not to build the most accurate model possible. Your goal is to demonstrate a clear and correct prediction workflow.
Use the code chunk import-data below to import your data, and call the main table data_raw.
Quick description of your data set
Answer the following questions.
- What does one row represent
Answer:
- Why is this data set suitable for a prediction task
Answer:
- What variable do you want to predict
Answer:
- Is your task a regression problem or a classification problem
Answer:
Step 2: Mini proposal
Write short answers. Keep them specific.
- What is your prediction question
Answer:
- Who might care about this prediction question
Answer:
- What is your target variable
Answer:
- Which predictors do you plan to use, and why
Answer:
- Why is this a reasonable introductory project
Answer:
- What is one challenge or limitation you expect
Answer:
Step 3: Prepare your modeling data
Create a table called data that is ready for modeling.
Keep this preparation light and focused. You may
- Select useful variables
- Filter rows
- Handle missing values
- Recode values
- Create a small number of simple derived variables
Do NOT turn this into another full wrangling project. If major cleaning was already done in Mini Project 2, briefly summarize that and then move on.
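As a sketch of what light preparation can look like, assuming the palmerpenguins data with body_mass_g as the target (adapt the variable names to your own data set):

```r
## Sketch only: names assume palmerpenguins::penguins with body_mass_g as target
library(dplyr)

data <- palmerpenguins::penguins |>
  select(body_mass_g, flipper_length_mm, bill_length_mm, species, sex) |>
  na.omit()   # drop rows with any missing values; simple and fine at this scale
```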
## Prepare your modeling data here
data <-

Describe your modeling data
- How many rows are in data
Answer:
- What is the target variable
Answer:
- Which predictors did you keep
Answer:
- Did you remove any rows or variables, and why
Answer:
Step 4: Quick check of the target and predictors
Before fitting models, inspect your variables.
## Check your target and predictors here
## Suggestions:
## glimpse(data)
## summary(data)

Target check
Describe the target variable.
- If regression, what is its range and general distribution
- If classification, what are the class counts
Answer:
Predictor check
Describe any issues you noticed with the predictors, such as missing values, unusual values, or highly unbalanced categories.
Answer:
Step 5: Split the data into training and test sets
Use one train and test split. Set a seed so that the split can be reproduced.
A common choice is about 80 percent for training and the rest for testing.
set.seed(3570)
## Create training and test sets here
## Example logic:
## n <- nrow(data)
## train_id <- sample(seq_len(n), size = round(0.80 * n))
## train <- data[train_id, ]
## test <- data[-train_id, ]

Why do we split the data
In 2 to 4 sentences, explain why the test set is important.
Answer:
Split summary
- Number of rows in training set: Answer:
- Number of rows in test set: Answer:
If classification, report the class counts in both sets.
Answer:
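If your task is classification, the class counts in each set can be checked with table(). A minimal sketch (here `target` is a placeholder for your own target column name):

```r
## Sketch only: replace `target` with your actual target column
table(train$target)
table(test$target)
```

If one class is very rare in either set, mention that in your answer, since it affects how the models and metrics behave.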
Step 6: Model 1
Choose a simple first model.
Recommended Model 1
- If regression, use linear regression
- If classification, use logistic regression
If you want to use a different Model 1, get instructor approval.
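For reference, the fit might look like the following sketch. The formula variables are placeholders from the penguins example, not requirements:

```r
## Regression sketch (numeric target):
model_1 <- lm(body_mass_g ~ flipper_length_mm + species, data = train)

## Classification sketch (binary target coded as a factor or 0/1):
## model_1 <- glm(target ~ predictor_1 + predictor_2,
##                data = train, family = binomial)
```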
Model 1 formula
Write your model formula in words.
Answer:
## Fit Model 1 here
model_1 <-

Model 1 interpretation
Briefly describe what Model 1 is doing.
Answer:
Step 7: Model 2
Choose a second simple model that is different from Model 1.
Recommended Model 2
- If regression, use a decision tree for regression
- If classification, use a decision tree for classification
If you want to use a different Model 2, get instructor approval.
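A decision tree can be fit with the rpart package. The sketch below assumes the penguins regression example; rpart() fits a classification tree automatically when the target is a factor:

```r
## Sketch only: install.packages("rpart") first if needed
library(rpart)

## Regression tree (placeholder variables from the penguins example):
model_2 <- rpart(body_mass_g ~ flipper_length_mm + species, data = train)
```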
Why choose this second model
Answer:
## Fit Model 2 here
model_2 <-

Model 2 interpretation
Briefly describe what Model 2 is doing.
Answer:
Step 8: Make predictions on the test set
Use both models to predict the target variable on the test set.
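A minimal sketch of the prediction step, assuming the model objects from Steps 6 and 7 (the "yes"/"no" labels and 0.5 cutoff below are placeholder choices, not requirements):

```r
## Regression: predict() returns numeric predictions directly
pred_1 <- predict(model_1, newdata = test)
pred_2 <- predict(model_2, newdata = test)

## Logistic regression: predict probabilities, then convert to classes
## prob_1 <- predict(model_1, newdata = test, type = "response")
## pred_1 <- ifelse(prob_1 > 0.5, "yes", "no")

## Classification tree: predict(model_2, newdata = test, type = "class")
```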
## Create predictions from both models here
## Suggested output names:
## pred_1
## pred_2

Step 9: Evaluate model performance
Use the test set only.
If your project is regression
Choose 1 or 2 of the following:
- RMSE (root mean square error)
- MAE (mean absolute error)
- \(R^2\)
Also include one simple plot such as predicted versus actual values.
If your project is classification
Choose 1 or 2 of the following:
- Accuracy
- Misclassification rate
- Sensitivity
- Specificity
Also include a confusion matrix.
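The metrics above can be computed with base R; a sketch for each task type (column and object names are placeholders from the earlier examples):

```r
## Regression sketch: RMSE and MAE for Model 1 on the test set
rmse_1 <- sqrt(mean((test$body_mass_g - pred_1)^2))
mae_1  <- mean(abs(test$body_mass_g - pred_1))

## Classification sketch: confusion matrix and accuracy
## conf_mat <- table(predicted = pred_1, actual = test$target)
## accuracy <- sum(diag(conf_mat)) / sum(conf_mat)
```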
## Evaluate Model 1 and Model 2 here
## Report your test set performance clearly

Results summary
Fill in the performance of both models.
Model 1
Answer:
Model 2
Answer:
Which model did better
Answer:
Was the difference large or small
Answer:
Step 10: Show one helpful output
Create one output that helps the audience understand the model results.
Examples:
- A confusion matrix
- A predicted versus actual plot
- A small table comparing metrics
- A simple visualization of prediction errors
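For instance, a predicted-versus-actual plot for a regression model might be sketched like this (variable names again come from the penguins example; adapt them to your data):

```r
## Sketch: predicted versus actual values for Model 1
library(ggplot2)

ggplot(data.frame(actual = test$body_mass_g, predicted = pred_1),
       aes(x = actual, y = predicted)) +
  geom_point(alpha = 0.6) +
  geom_abline(slope = 1, intercept = 0, linetype = "dashed") +  # perfect-prediction line
  labs(x = "Actual", y = "Predicted",
       title = "Model 1: predicted versus actual values")
```

Points near the dashed line are predictions close to the truth; systematic departures from it are easy for an audience to see.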
## Add one helpful output here

Explain why this output helps the audience understand the results.
Answer:
Step 11: Interpret the results in plain language
Answer the following questions.
- What did your team learn from this prediction task
Answer:
- What can your model do reasonably well
Answer:
- Where might your model fail or be less reliable
Answer:
- What should we be careful not to claim from this project
Answer:
- If you had more time, what is one reasonable next step
Answer:
Step 12: Model honesty checklist
Confirm that your report shows the following:
- You clearly defined a prediction question
- You identified the target and predictors
- You used a reproducible train and test split
- You fit exactly 2 models
- You evaluated performance on the test set
- You explained results in plain language
- You discussed at least one limitation
- You did not overclaim what the model means
Revise your report if needed.
Step 13: Team reflection
Each team member writes 2 to 4 sentences:
- What you contributed
- One thing you learned about supervised learning
- One thing you would improve next time
Member 1: your name
Answer:
Member 2: your name
Answer:
Member 3: your name
Answer:
Member 4: your name (if applicable)
Answer:
Step 14: Presentation plan
Plan a 10 minute talk with the suggested structure:
- About 1 minute: data set and prediction question
- About 2 minutes: target, predictors, and data preparation
- About 2 minutes: train and test split
- About 2 minutes: Model 1 and Model 2
- About 2 minutes: test set results and comparison
- About 1 minute: takeaway and limitations
Presentation order
teams <- c("Team 1", "Superb Statisticians", "The Data Scientists", "Stat Padders", "Data Divers", "Plot Squad")
set.seed(19)
sample(teams, 6, replace = FALSE)

[1] "Data Divers"          "Superb Statisticians" "Plot Squad"
[4] "The Data Scientists"  "Team 1"               "Stat Padders"
Grading guide
Total 15 points:
- Clear prediction question, target, and predictors (3 pts)
- Reasonable train and test workflow and appropriate model setup (4 pts)
- Correct evaluation and honest comparison of the 2 models (4 pts)
- Clear interpretation, limitations, and communication (4 pts)