Unit Structure
Unsupervised Learning Algorithms
│
├── 1. Clustering Techniques
│   ├── K-Means
│   ├── Silhouette Scores (Evaluation)
│   ├── Hierarchical Clustering
│   ├── Fuzzy C-Means
│   └── DBSCAN (Density-Based Clustering)
│
└── 2. Dimensionality Reduction Techniques
    ├── Low Variance Filter
    ├── High Correlation Filter
    ├── Backward Feature Elimination
    ├── Forward Feature Selection
    ├── Principal Component Analysis (PCA)
    └── Projection Methods
- Not all machine learning involves labeled data; sometimes we just want the machine to find patterns on its own. That’s where unsupervised learning comes in! In this unit, we dive into two powerful unsupervised approaches: clustering and dimensionality reduction.
- Let’s start with clustering, where the goal is to group similar data points together. You’ll explore the classic K-Means algorithm, evaluate cluster quality using Silhouette Scores, and go deeper with Hierarchical Clustering, Fuzzy C-Means for soft (overlapping) cluster assignments, and DBSCAN, a density-based method that handles arbitrarily shaped clusters and noisy points well.
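To make the first two ideas concrete, here is a minimal pure-Python sketch of K-Means (Lloyd's algorithm) plus the silhouette score used to evaluate the resulting clusters. The toy data, seed, and thresholds are made up for illustration; real projects would typically use an optimized library implementation instead.

```python
import math
import random

def dist(a, b):
    return math.dist(a, b)

def mean(cluster):
    # component-wise mean of a list of points
    return tuple(sum(xs) / len(cluster) for xs in zip(*cluster))

def kmeans(points, k, iters=100, seed=0):
    # Lloyd's algorithm: alternate point assignment and centroid update
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        # keep the old centroid if a cluster ends up empty
        updated = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
        if updated == centroids:  # converged
            break
        centroids = updated
    return centroids, clusters

def silhouette(clusters):
    # mean silhouette coefficient: s = (b - a) / max(a, b) per point,
    # where a = mean intra-cluster distance, b = mean distance to the
    # nearest other cluster
    scores = []
    for i, cluster in enumerate(clusters):
        for p in cluster:
            a = (sum(dist(p, q) for q in cluster) / (len(cluster) - 1)
                 if len(cluster) > 1 else 0.0)
            b = min(sum(dist(p, q) for q in other) / len(other)
                    for j, other in enumerate(clusters) if j != i and other)
            scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(scores) / len(scores)

# toy example: two well-separated blobs
points = [(0, 0), (0, 1), (1, 0), (1, 1),
          (10, 10), (10, 11), (11, 10), (11, 11)]
centroids, clusters = kmeans(points, k=2)
score = silhouette(clusters)
```

On this data the algorithm recovers the two blob centers, and the silhouette score lands close to 1, signalling tight, well-separated clusters; scores near 0 or negative would indicate overlapping or misassigned points.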
- Once clustering is covered, we move on to dimensionality reduction, a must when you’re working with huge datasets that have too many features. You’ll learn simple filters (Low Variance and High Correlation) to eliminate unhelpful features, as well as Forward Feature Selection and Backward Feature Elimination to pick the most useful ones. Finally, you’ll explore more advanced methods like Principal Component Analysis (PCA) and Projection Methods, which reduce complexity while keeping the core structure of your data intact.
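The two filter techniques above can be sketched in a few lines of pure Python. This is a rough illustration only: the column names, toy data, and thresholds are invented for the example, and a real pipeline would rely on a library implementation.

```python
import math

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length columns
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def low_variance_filter(columns, threshold=0.01):
    # drop features whose variance is at or below the threshold:
    # a near-constant column carries almost no information
    return {name: col for name, col in columns.items()
            if variance(col) > threshold}

def high_correlation_filter(columns, threshold=0.9):
    # of each highly correlated pair, keep the first column and drop
    # the second: the two carry nearly the same information
    names = list(columns)
    dropped = set()
    for i, a in enumerate(names):
        if a in dropped:
            continue
        for b in names[i + 1:]:
            if b not in dropped and abs(pearson(columns[a], columns[b])) > threshold:
                dropped.add(b)
    return {n: columns[n] for n in names if n not in dropped}

# toy dataset: 'y' is just 'x' scaled by 2, 'z' is nearly constant
data = {"x": [1, 2, 3, 4], "y": [2, 4, 6, 8], "z": [5.0, 5.0, 5.0, 5.01]}
kept = high_correlation_filter(low_variance_filter(data))
```

Here the low-variance filter removes the near-constant `z`, and the correlation filter then removes `y` (perfectly correlated with `x`), leaving a single informative feature. Running the variance filter first also avoids correlation computations on near-constant columns, whose tiny standard deviations make the coefficient numerically unstable.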