A comparison of bagging and boosting for tabular supervised ML
Random forests vs gradient boosted trees. 3d ago
Deep dive into gradient-boosted trees: an intro to boosting ensembles 🚀
Explaining how gradient boosting iteratively corrects errors to build a powerful ensemble. Mar 23
How does a random forest reduce the variance of a decision tree?
Explaining how the bootstrap aggregation procedure reduces the variance of an ensemble, and how the random forest extends the bagging… Mar 16
How to stop decision trees from overfitting with pruning techniques
Explaining visually what it means for a decision tree to overfit training data, and using pruning techniques to fix it. Mar 10
Decision trees: entropy, impurity and recursive binary splitting
Explaining the decision tree’s greedy recursive nature and the maths behind splitting criteria, with images and animations. Mar 2
Information entropy: measuring information with maths
A deep dive into self-information and information entropy with a few thought experiments. Feb 23
Tackling linearly inseparable data with the Support Vector Machine
The soft-margin SVM, feature maps, kernels and Mercer’s Theorem. Feb 16
Optimising the Support Vector Machine using Lagrange multipliers (step-by-step breakdown)
Defining “constrained optimisation”, how to solve such problems, and how this idea can be applied to the SVM. Feb 9
Three vector tricks to understand the Support Vector Machine’s margin
Projections, unit vectors and dot products explained and used to derive a formula for the margin of the Support Vector Machine. Feb 2
Support Vector Machine setup
Support Vector Machine, margin, support vectors, constrained optimisation problem, with images and videos to provide visual aid. Jan 26