Overview
Intro
Ensemble Methods
- Foundations of Ensemble Learning - Concepts, Terminology, and the Wisdom of Crowds
- Bagging and Random Forests - Bootstrap Sampling, Model Diversity, and Majority Voting
- Heterogeneous Parallel Ensembles - Weighting Schemes and Stacking Meta‑Learning
- Adaptive Boosting (AdaBoost) - Sequential Weak Learner Fusion via Instance Weighting
- Gradient Boosting and XGBoost - Residual‑Based Additive Modeling and Gradient‑Descent Optimization
Perceptron
Feed-Forward Neural Network
- Fundamentals, Motivation, and Application Domains of Artificial Neural Networks
- Feed‑Forward Neural Network Architecture and Multi‑Layer Perceptron Design
- Forward Propagation Mechanics in Feed‑Forward Neural Networks
- Backpropagation Learning Algorithm and Gradient Computation for MLPs
- Practical Implementation of Feed‑Forward Neural Networks with Keras and Parameter Accounting
Quiz 1
Convolutional Neural Network
Recurrent Neural Network (RNN Pt. 1)
Long Short-Term Memory (RNN Pt. 2)
Attention and Transformers
Reinforcement Learning