- Maximum Likelihood Estimation
- Least Squares Error
- Justification for minimizing squared error (error ~ N(0, 1))
- Wrap-up of Least Squares Error
- Gradient Descent (a minimal sketch appears at the end of this outline)
- Assessing performance of a regression model --> choosing a loss/cost function
- Overfitting
- Generalization (true) Error
- Error decomposition: bias + variance + noise
- Bias-Variance Trade-off in model complexity
- Regularization: dealing with infinitely many solutions
- Ridge regression --> adding curvature
- Bias, variance, and irreducible error
- Importance of validation
- Leave-one-out validation
- K-fold validation (see the cross-validation sketch at the end of this outline)
- Choosing hyperparameters
- Benefits of L1 Regularization
- Coordinate Descent Algorithm
- Subgradients
- Norms and Convexity
- Logistic Regression
- Classification
- Introduction to Optimization
- Gradient Descent
- Stochastic Gradient Descent
- Perceptron training algorithm
- Linear separability
- Kernel Trick: separation by mapping to a higher-dimensional space
- Support Vector Machines (SVM)
- SVM as an optimization problem
- SVM is a quadratic optimization problem
- K-nearest Neighbors
- More in-depth on Kernel trick
- Common kernels
- Kernelized ridge regression
- Random Feature Trick
- Building confidence intervals with the Bootstrap
- K-means (unsupervised learning)
- Low-rank approximations
- Framing PCA as a variance-maximizing optimization problem
- SVD
- Low-rank approximations
- Relation to PCA
- Unsupervised Learning
- Probabilistic Interpretation of Classification
- Neural networks: feedforward, convolutional, recurrent
- Backpropagation
- Autodifferentiation
- Decision Trees
- Bagging (bootstrap aggregation)
- Random Forests
- Boosting
- Probability review
- Expectation, variance
- Linear algebra review
- Intro to Python
- Maximum Likelihood Estimation (MLE)
- Bias-Variance trade-off
- Linear Regression
- Ridge Regression
- Test error and training error
- Norms and Convexity
- LASSO regularization - Coordinate Descent (a minimal sketch appears at the end of this outline)
- Binary Logistic Regression
- Gradient Descent & Stochastic Gradient Descent
- Kernel Trick and Kernelized ridge regression
- Multivariate Gaussians
- K-means
- Bootstrap
- Expectation Maximization (Mixture Models)
- Alternating Minimization
- Low-rank approximation
- PyTorch and Autodifferentiation
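
The gradient descent items above first come up for least-squares linear regression, so a small illustration may help. This is a minimal, hypothetical sketch (NumPy assumed; the data, step size, and iteration count are made up for illustration), not the course's own code:

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, n_iters=500):
    """Batch gradient descent for the least-squares objective (1/n) * ||Xw - y||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        grad = (2.0 / n) * X.T @ (X @ w - y)  # gradient of the mean squared error
        w -= lr * grad
    return w

# Hypothetical synthetic data: noisy linear targets, error ~ N(0, 0.1^2).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)
print(gradient_descent(X, y))  # should land close to w_true
```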
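For the validation and hyperparameter items, here is a hedged sketch of K-fold cross-validation used to pick the ridge regularization strength. The `ridge_fit` helper, the grid of lambda values, and the synthetic data are all hypothetical:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def kfold_cv_error(X, y, lam, k=5, seed=0):
    """Average held-out squared error of ridge regression over k folds."""
    n = X.shape[0]
    folds = np.array_split(np.random.default_rng(seed).permutation(n), k)
    errors = []
    for fold in folds:
        train = np.setdiff1d(np.arange(n), fold)  # all rows not in the held-out fold
        w = ridge_fit(X[train], y[train], lam)
        errors.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return np.mean(errors)

# Hypothetical hyperparameter search over a small lambda grid.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))
y = X[:, 0] - 2.0 * X[:, 1] + 0.5 * rng.normal(size=60)
lambdas = [0.01, 0.1, 1.0, 10.0]
best = min(lambdas, key=lambda lam: kfold_cv_error(X, y, lam))
print("selected lambda:", best)
```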
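For the LASSO / coordinate descent / subgradient items, a minimal sketch of cyclic coordinate descent with the soft-thresholding update, for the objective ||y - Xw||^2 + lam * ||w||_1. The data and lambda value are made up for illustration:

```python
import numpy as np

def soft_threshold(a, t):
    """Soft-thresholding: the closed-form minimizer arising from the L1 subgradient."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_passes=100):
    """Cyclic coordinate descent for min_w ||y - Xw||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    z = (X ** 2).sum(axis=0)  # per-feature squared norms x_j^T x_j
    for _ in range(n_passes):
        for j in range(d):
            # Correlation of feature j with the residual that excludes feature j.
            rho = X[:, j] @ (y - X @ w + w[j] * X[:, j])
            w[j] = soft_threshold(rho, lam / 2.0) / z[j]
    return w

# Hypothetical sparse-recovery example: only two features are truly active.
rng = np.random.default_rng(2)
X = rng.normal(size=(80, 20))
w_true = np.zeros(20)
w_true[0], w_true[3] = 3.0, -2.0
y = X @ w_true + 0.1 * rng.normal(size=80)
print(np.round(lasso_coordinate_descent(X, y, lam=10.0), 2))  # mostly zeros, two active weights
```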