
Applied Classical Machine Learning
This course turns you from an ML beginner into a practitioner, teaching you to build deployable solutions using Scikit-learn and XGBoost. Through hands-on labs like predicting diabetes progression and detecting cancer, you'll master key algorithms and model tuning. Learn to diagnose overfitting and deliver actionable insights for real-world challenges.
Instructor: Ridwan Ibidunni
Available to members only
Prerequisite
Python + Pandas/NumPy
Data preprocessing
Data visualization
Basic stats
Google Colab
Recommended: Completion of “Python for ML”
Modules & Duration
8 modules
7 weeks + final project
Assignments
14 assignments
4 projects
What you'll learn
Foundations of ML: Differentiate supervised/unsupervised learning and master bias-variance tradeoffs.
Ensemble Methods: Boost accuracy with Random Forests and XGBoost (e.g., Titanic survival prediction).
Regression Mastery: Implement linear/polynomial regression, apply regularization (Ridge/Lasso), and predict real-world trends (e.g., COVID-19 cases).
Unsupervised Learning: Cluster customers using k-means, reduce dimensions with PCA/t-SNE, and extract business insights.
Classification Techniques: Build logistic regression, SVM, and decision tree models for problems like cancer detection and spam classification.
Model Evaluation: Diagnose overfitting, optimize hyperparameters via GridSearchCV, and select metrics (ROC-AUC, F1-score).
Skills you’ll gain
Supervised vs. unsupervised learning
Bias-variance tradeoff
Data splitting
Cross-validation
Hyperparameter tuning
Linear/polynomial regression
Regularization (Ridge, Lasso)
Logistic regression
SVM
Decision trees
Random Forest
AdaBoost
XGBoost
K-means
Hierarchical clustering
PCA
t-SNE
Precision, recall, F1-score
ROC curve
Scikit-learn
Matplotlib
Google Colab
VS Code
Taught in English
Add certificate to your LinkedIn profile
Machine learning drives decisions in healthcare, finance, and tech, but only when theory meets practice. This course transforms you from an ML novice into a practitioner who builds deployable solutions. Through hands-on labs such as predicting diabetes progression, detecting cancer, and segmenting retail customers, you’ll master classical algorithms using Scikit-learn and XGBoost. Learn to diagnose overfitting, tune models for real-world noise, and deliver insights that stakeholders trust. Enroll to turn data into decisions.
Course Breakdown - 8 modules
Module 1
What you'll learn:
Identify ML types, split datasets, and diagnose bias-variance tradeoffs.
Includes: 2 hours of live virtual lectures
Hands-on: Polynomial overfitting lab, data splitting with stratification
Assignment: Analyze a real-world ML use case (e.g., recommendation systems)
Dataset: Iris
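The stratified-splitting step covered in this module can be sketched with Scikit-learn on the Iris lab dataset (the split ratio and random seed here are illustrative choices, not the lab's exact settings):

```python
from collections import Counter

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load the Iris dataset used in this module's lab
X, y = load_iris(return_X_y=True)

# stratify=y keeps each class's share identical in the train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Iris has 50 samples per class, so a stratified 20% test set
# holds exactly 10 samples of each species
test_counts = Counter(y_test)
```

Without `stratify`, a small test set can end up with a skewed class mix, which distorts every metric computed on it.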
Module 2
What you'll learn:
Implement linear regression, interpret coefficients, and handle skewed data.
Includes: 2 hours of live virtual lectures
Hands-on: Boston Housing price prediction, residual analysis
Debug Challenge: Fix skewed target variables
Tools: Scikit-learn, Seaborn
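A minimal sketch of the residual-analysis idea from this module. Note that the Boston Housing dataset was removed from recent Scikit-learn releases, so synthetic regression data stands in here; all names and parameters are illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

# Synthetic data stands in for the lab's housing dataset
X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)

model = LinearRegression().fit(X, y)
residuals = y - model.predict(X)

# With an intercept, ordinary least squares forces residuals to average zero;
# plotting residuals against predictions then reveals skew or nonlinearity
mean_residual = residuals.mean()
```

A residual plot that fans out or curves is the classic sign that the skewed-target fixes from the debug challenge (e.g., a log transform) are needed.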
Module 3
What you'll learn:
Combat overfitting with Ridge/Lasso regression and balance L1/L2 penalties.
Includes: 2 hours of live virtual lectures
Hands-on: Diabetes dataset regularization, coefficient shrinkage
Project: Predict COVID-19 case trends
Dataset: Diabetes, COVID-19
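The coefficient-shrinkage lab can be sketched on the Diabetes dataset like this (the `alpha` value is an illustrative choice, not the lab's):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso, LinearRegression

X, y = load_diabetes(return_X_y=True)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)  # L1 penalty; alpha chosen for illustration

# The L1 penalty drives some coefficients exactly to zero,
# effectively performing feature selection
n_zeroed = int(np.sum(lasso.coef_ == 0.0))
total_shrinkage = np.abs(ols.coef_).sum() - np.abs(lasso.coef_).sum()
```

Ridge (L2) shrinks all coefficients toward zero without zeroing them; Lasso's exact zeros are what make it useful for pruning features.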
Module 4
What you'll learn:
Build binary classifiers, tune probability thresholds, and visualize decision boundaries.
Includes: 2 hours of live virtual lectures
Hands-on: Spam email classification, SVM kernel comparison
Debug Challenge: Address class imbalance with SMOTE
Dataset: Email Spam
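The threshold-tuning idea from this module can be sketched as follows. SMOTE itself ships in the separate imbalanced-learn package, so this sketch sticks to Scikit-learn and uses imbalanced synthetic data as a stand-in for the spam corpus:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Imbalanced synthetic data stands in for the spam lab's email features
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X, y)
proba = clf.predict_proba(X)[:, 1]

# Lowering the decision threshold from the default 0.5 flags more of the
# rare positive class (higher recall at the cost of precision)
preds_default = (proba >= 0.5).astype(int)
preds_lowered = (proba >= 0.2).astype(int)
```

On imbalanced data, choosing where to cut the predicted probability is often as impactful as the choice of classifier.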
Module 5
What you'll learn:
Train interpretable trees, evaluate models using confusion matrices, and optimize precision-recall.
Includes: 2 hours of live virtual lectures
Hands-on: Iris species classification, ROC curve plotting
Project: Cancer detection with Wisconsin dataset
Tools: Scikit-learn
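The confusion-matrix evaluation from this module can be sketched on the Wisconsin dataset used in the project (tree depth and seed are illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# The Wisconsin breast cancer dataset from the module's project
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42
)

tree = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)

# Rows are true classes, columns are predictions; in this dataset label 0
# is malignant, so cm[0, 1] counts malignant tumors predicted benign
cm = confusion_matrix(y_test, tree.predict(X_test))
```

For cancer detection, that `cm[0, 1]` cell (missed malignancies) is the cost that precision-recall tuning tries to drive down.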
Module 6
What you'll learn:
Leverage Random Forests for robustness and XGBoost for speed/accuracy tradeoffs.
Includes: 2 hours of live virtual lectures
Hands-on: Customer churn prediction, feature importance analysis
Assignment: Titanic survival prediction
Dataset: Titanic
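The feature-importance analysis from the churn lab can be sketched with Scikit-learn's Random Forest (XGBoost is a separate package, so it is omitted here; the synthetic data and all parameters are illustrative stand-ins for the customer table):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data stands in for the churn lab's customer features
X, y = make_classification(
    n_samples=400, n_features=8, n_informative=3, random_state=0
)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Impurity-based importances sum to 1; ranking them shows which
# features the forest leans on most when predicting churn
importances = rf.feature_importances_
top_feature = int(np.argmax(importances))
```

Mapping the top-ranked indices back to column names is what turns the model into the "feature importance analysis" stakeholders actually read.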
Module 7
What you'll learn:
Cluster customers via k-means/hierarchical methods and compress dimensions with PCA/t-SNE.
Includes: 2 hours of live virtual lectures
Hands-on: RFM-based customer segmentation, MNIST visualization
Project: Retail customer profiling
Tools: Scikit-learn, Seaborn
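The PCA-then-cluster pipeline from this module can be sketched like this (Iris stands in for the retail customer table; component and cluster counts are illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Iris stands in for the retail customer table here
X, _ = load_iris(return_X_y=True)

# Compress 4 features down to 2 principal components, then cluster
X_2d = PCA(n_components=2).fit_transform(X)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_2d)
labels = km.labels_
```

Reducing dimensions first makes the clusters directly plottable, which is how the segmentation lab turns unlabeled data into business insight.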
Module 8
What you'll learn:
Optimize models with cross-validation and evaluate performance on imbalanced data.
Includes:
Guided project: End-to-end Kaggle competition
Tasks: Fraud detection with precision-recall/ROC analysis
Deliverable: Business-ready ML pipeline
Grading: Functionality (40%), EDA (20%), tuning (10%)
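The cross-validated tuning at the heart of the final project can be sketched with GridSearchCV. The breast cancer dataset stands in here for the competition's fraud data, and the parameter grid is an illustrative choice:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in for the fraud data

# 5-fold cross-validated search over the regularization strength C,
# scored with ROC-AUC as is typical for imbalanced problems
grid = GridSearchCV(
    make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    param_grid={"logisticregression__C": [0.01, 0.1, 1.0]},
    cv=5,
    scoring="roc_auc",
).fit(X, y)

best_C = grid.best_params_["logisticregression__C"]
```

Bundling the scaler into the pipeline keeps preprocessing inside each fold, avoiding the data leakage the course's debug-first approach warns about.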
Why This Course Stands Out
Industry Projects: Solve problems with healthcare (cancer detection), finance (fraud), and retail (segmentation) datasets.
Debug-First Approach: Fix real-world issues like overfitting, leakage, and imbalance in every module.
Peer-Driven Learning: Collaborative reviews and pair programming mirror workplace workflows.
Portfolio: Showcase 4 projects on LinkedIn.