library(reticulate)
py_install("seaborn")
Mini Project 3
A Python Prediction Challenge
Overview
In Mini Project 1, your team explored data in R and told a short story with visuals. In Mini Project 2, your team used R to clean, reshape, and organize data into a trustworthy analysis-ready table.
In Mini Project 3, your team will switch to Python and take the next step: build a simple prediction model and evaluate how well it works on new data.
Think of your team as a small beginner data science studio. A client has asked for a quick first prediction tool. Your job is not to build the most advanced model. Your job is to build a clear, correct, honest, and reproducible workflow.
This project should stay at an introductory data science level. Keep it simple.
Your task
Complete this Quarto file by doing the following:
- Choose one prediction question
- Import data into Python
- Prepare a small modeling data set
- Split the data into a training set and a test set
- Fit 2 simple supervised learning models
- Compare model performance on the test set
- Explain what the results mean, and what they do not mean
This is not a competition to get the highest possible accuracy. It is a project about learning the prediction workflow.
AI use is allowed
You are allowed to use AI tools to help with this project. For example, AI may help you:
- Write or debug Python code
- Explain error messages
- Suggest ways to clean or recode variables
- Remind you how to calculate evaluation metrics
- Help revise your writing for clarity
However, your team is still responsible for all final work. You must:
- Check that the code actually runs
- Check that the model setup is appropriate
- Check that the interpretation is statistically correct
- Follow the course AI policy and clearly document your AI use when required
Do not copy AI output into your report without checking it carefully.
What you will submit
Show, in your Posit Cloud project or other approved course environment:
- The rendered HTML report
- The complete source .qmd file
- Any data file you used, if it is not built into a Python package
- Any cleaned data file you created for this project, if applicable
What you will present
A 10-minute team presentation that explains:
- Your prediction question
- Your data and variables
- Your train and test split
- The 2 models you compared
- Which model performed better on the test set
- What your team learned, including limitations
Team Info
Team Name: Your Team Name
Team members and roles for this project:
- Project lead (keeps time, coordinates tasks): Your Member Name(s)
- Python workflow lead (imports data, prepares code): Your Member Name(s)
- Modeling lead (fits models, organizes outputs): Your Member Name(s)
- Evaluation and presentation lead (compares models, prepares slides): Your Member Name(s)
Project rules
- Choose one prediction question
- Use one data set
- Use no more than 6 predictors
- If you do classification, use a binary target variable
- Fit exactly 2 simple models
- Use one train and test split
- Use 1 or 2 evaluation metrics
- Explain results in plain language
- Do not claim causation
- Do not overstate what your model can do
- Do not use advanced tuning, random forests, boosting, deep learning, or many competing models
Python setup
Use code like the block below to load the packages you need in your working version of the project.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.tree import DecisionTreeRegressor, DecisionTreeClassifier, plot_tree
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score, accuracy_score, confusion_matrix, ConfusionMatrixDisplay
Step 1: Choose your prediction challenge
Choose one data set and create one clear prediction question.
Suggested data options
- titanic from seaborn. Example question: Can we predict whether a passenger survived?
- mpg from seaborn. Example question: Can we predict a car's mpg?
- penguins from seaborn. Example question: Can we predict whether a penguin is Gentoo or not?
- Your cleaned data from Mini Project 2, exported as a .csv file and imported into Python
- Your own data set, with instructor approval
Choose a data set and target that are manageable for a short team project. Your goal is not to build the most accurate model possible. Your goal is to demonstrate a clear prediction workflow that you understand.
Import your data
Use the chunk below to import your data and call the main table data_raw.
# Example 1
# data_raw = sns.load_dataset("titanic")
# data_raw
# # Example 2
# data_raw = sns.load_dataset("mpg")
# data_raw
# # Example 3
# data_raw = sns.load_dataset("penguins")
# data_raw
# Example 4
# data_raw = pd.read_csv("your_clean_data.csv")
# data_raw = ...
Quick description
- What does one row represent?
Answer:
- What is your prediction question?
Answer:
- What is your target variable?
Answer:
- Is your task regression or classification?
Answer:
- Who might care about this prediction question?
Answer:
Step 2: Prepare your modeling data
Create a table called data that is ready for modeling.
Keep this preparation focused. You may:
- Select a small set of useful variables
- Filter rows
- Handle missing values
- Recode a variable
- Create a small number of simple derived variables
- Convert categorical variables into dummy variables if needed
Do not turn this into another major wrangling project.
# Prepare your modeling data here
# Example ideas:
# data = data_raw[[...]].dropna().copy()
# data["high_mpg"] = (data["mpg"] >= data["mpg"].median()).astype(int)
# data = pd.get_dummies(data, drop_first=True)
data = ...
Describe your modeling data
- How many rows are in data?
Answer:
- What is the target variable?
Answer:
- Which predictors did you keep, and why?
Answer:
- Did you remove any rows or variables? If yes, why?
Answer:
Step 3: Check the target and predictors
Before modeling, inspect the variables you plan to use.
# Suggestions:
# data.head()
# data.info()
# data.describe(include="all")
Quick check
- If regression, what is the range and general distribution of the target?
- If classification, what are the class counts?
- Did you notice any issues, such as missing values, unusual values, or imbalanced classes?
Answer:
Step 4: Create training and test sets
Use one reproducible train and test split.
A common choice is about 80 percent for training and 20 percent for testing.
# Define your predictors and target
# Example:
# X = data.drop(columns=["target_name"])
# y = data["target_name"]
X = ...
y = ...
# For classification, you may use stratify=y
# For regression, use stratify=None
X_train, X_test, y_train, y_test = train_test_split(
X,
y,
test_size=0.20,
random_state=3570,
stratify=None,
)
Why do we split the data?
In 2 to 4 sentences, explain why the test set matters.
Answer:
Split summary
- Number of rows in training set: Answer:
- Number of rows in test set: Answer:
If classification, report the class counts in both sets.
Answer:
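One quick way to get the class counts, sketched here with a made-up binary target (in your project, apply `value_counts()` to your own `y_train` and `y_test`):

```python
import pandas as pd

# Hypothetical binary target, already split into training and test pieces
y_train = pd.Series([1, 0, 1, 1, 0, 0, 1, 0])
y_test = pd.Series([1, 0])

# value_counts() reports how many rows fall in each class
print(y_train.value_counts())
print(y_test.value_counts())
```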
Step 5: Fit Model 1
Recommended Model 1
- If regression, use linear regression
- If classification, use logistic regression
If you want a different Model 1, get instructor approval.
Model 1 formula in words
Describe your model in words.
Answer:
# Fit Model 1 here
# Examples:
# model_1 = LinearRegression()
# model_1 = LogisticRegression(max_iter=1000)
model_1 = ...
model_1.fit(X_train, y_train)
What is Model 1 doing?
Answer:
Step 6: Fit Model 2
Recommended Model 2
- If regression, use a decision tree regressor
- If classification, use a decision tree classifier
To keep the model simple and interpretable, use a small tree. For example, you may set max_depth = 3 or max_depth = 4.
If you want a different Model 2, get instructor approval.
Why did you choose this second model?
Answer:
# Fit Model 2 here
# Examples:
# model_2 = DecisionTreeRegressor(max_depth=3, random_state=3570)
# model_2 = DecisionTreeClassifier(max_depth=3, random_state=3570)
model_2 = ...
model_2.fit(X_train, y_train)
What is Model 2 doing?
Answer:
Step 7: Make predictions and evaluate both models
Use the test set only for evaluation.
# Predictions
# For regression, predict() returns predicted values.
# For classification, predict() returns predicted class labels.
pred_1 = model_1.predict(X_test)
pred_2 = model_2.predict(X_test)
If your project is regression
Choose 1 or 2 of the following:
- RMSE
- MAE
- \(R^2\)
You should also include one simple plot, such as predicted versus actual values.
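A predicted-versus-actual plot can be sketched as below. The `actual` and `predicted` arrays here are hypothetical stand-ins; in your project you would use `y_test` and `pred_1` (or `pred_2`) instead.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical test-set values; replace with y_test and your predictions
actual = np.array([20.0, 25.0, 30.0, 35.0])
predicted = np.array([22.0, 24.0, 31.0, 33.0])

fig, ax = plt.subplots()
ax.scatter(actual, predicted)
# 45-degree reference line: points on this line are perfect predictions
lims = [actual.min(), actual.max()]
ax.plot(lims, lims, linestyle="--")
ax.set_xlabel("Actual")
ax.set_ylabel("Predicted")
ax.set_title("Predicted versus actual (test set)")
plt.show()
```

Points far from the dashed line are the cases your model predicts poorly, which is also useful material for Step 9.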
# Example regression metrics
# rmse_1 = np.sqrt(mean_squared_error(y_test, pred_1))
# mae_1 = mean_absolute_error(y_test, pred_1)
# r2_1 = r2_score(y_test, pred_1)
If your project is classification
Choose 1 or 2 of the following:
- Accuracy
- Misclassification rate
- Sensitivity
- Specificity
You should also include a confusion matrix.
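Sensitivity, specificity, and the misclassification rate can all be read off the confusion matrix. A sketch with made-up labels (assuming a binary target where 1 is the "positive" class):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical true and predicted binary labels (1 = positive class)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# For labels 0 and 1, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)                        # true positive rate
specificity = tn / (tn + fp)                        # true negative rate
misclassification = (fp + fn) / (tn + fp + fn + tp) # 1 - accuracy

print(sensitivity, specificity, misclassification)
```

In your project, replace `y_true` and `y_pred` with `y_test` and your model's test-set predictions.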
If you want predicted probabilities, use predict_proba(). If you want class labels, use predict().
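The difference between the two methods can be seen on a tiny toy data set (the data below is made up for illustration, not from the project data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny made-up data set: one predictor, binary target
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# predict() returns hard class labels (0 or 1)
print(model.predict([[2.0], [11.0]]))
# predict_proba() returns one probability column per class, rows sum to 1
print(model.predict_proba([[2.0], [11.0]]))
```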
# Example classification metrics
# acc_1 = accuracy_score(y_test, pred_1)
# cm_1 = confusion_matrix(y_test, pred_1)
# Evaluate Model 1 and Model 2 here
Results summary
Model 1
Answer:
Model 2
Answer:
Which model did better on the test set?
Answer:
Was the difference large or small?
Answer:
Step 8: Show one helpful output
Create one output that helps the audience understand the results.
Examples:
- A confusion matrix
- A predicted versus actual plot
- A small comparison table of metrics
- A simple plot of prediction errors
- A shallow decision tree plot
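A small comparison table is easy to build with pandas. The metric values below are hypothetical placeholders; substitute the numbers you computed in Step 7.

```python
import pandas as pd

# Hypothetical test-set metrics; replace with your computed values
comparison = pd.DataFrame(
    {
        "model": ["Model 1", "Model 2"],
        "accuracy": [0.81, 0.78],
    }
)
print(comparison)
```

A table like this makes the Step 7 comparison easy to show on a presentation slide.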
# Add one helpful output here
Explain why this output helps the audience understand the results.
Answer:
Step 9: Explain the results honestly
Answer the following questions.
- What did your team learn from this prediction task?
Answer:
- What can your model do reasonably well?
Answer:
- Where might your model fail or be less reliable?
Answer:
- What should we be careful not to claim from this project?
Answer:
- If you had more time, what is one reasonable next step?
Answer:
Step 10: Team reflection
Each team member writes 2 to 4 sentences:
- What you contributed
- One thing you learned about supervised learning in Python
- One thing you would improve next time
Member 1: your name
Answer:
Member 2: your name
Answer:
Member 3: your name
Answer:
Member 4: your name (if applicable)
Answer:
Step 11: Presentation plan
Plan a 10-minute talk with the following structure:
- About 1 minute: data set and prediction question
- About 2 minutes: data preparation and chosen variables
- About 2 minutes: train and test split
- About 2 minutes: Model 1 and Model 2
- About 2 minutes: test set results and comparison
- About 1 minute: takeaway and limitations
Presentation order will be announced in class.
Grading guide
Total 15 points:
- Clear prediction question, target, and predictors (3 pts)
- Reasonable Python workflow, train and test split, and model setup (4 pts)
- Correct evaluation and honest comparison of the 2 models (4 pts)
- Clear interpretation, limitations, and communication (4 pts)