Python implementations of some of the fundamental Machine Learning models and algorithms from scratch.
The goal of this project is not to create algorithms that are as streamlined and computationally efficient as possible, but rather to present their inner workings in a clear and usable manner.
- Adaboost
- Apriori
- Bayesian Regression
- DBSCAN
- Decision Tree
- Elastic Net
- FP-Growth
- Gaussian Mixture Model
- Genetic Algorithm
- K Nearest Neighbors
- K-Means
- Lasso Regression
- Linear Regression
- Logistic Regression
- Multilayer Perceptron
- Naive Bayes
- Perceptron
- Principal Component Analysis
- Random Forest
- Ridge Regression
- Support Vector Machine
- XGBoost
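
To give a flavour of the from-scratch style these implementations aim for, below is a minimal sketch of linear regression trained with batch gradient descent. It is illustrative only and not taken from the repository; the class name, hyperparameters, and example data are hypothetical.

```python
import numpy as np


class LinearRegressionGD:
    """Plain-NumPy linear regression fitted with batch gradient descent.

    Illustrative sketch only -- the class name and hyperparameters are
    hypothetical and do not correspond to a specific file in this repository.
    """

    def __init__(self, learning_rate=0.01, n_iterations=1000):
        self.learning_rate = learning_rate
        self.n_iterations = n_iterations
        self.weights = None
        self.bias = 0.0

    def fit(self, X, y):
        n_samples, n_features = X.shape
        self.weights = np.zeros(n_features)
        for _ in range(self.n_iterations):
            # Forward pass: current predictions.
            y_pred = X @ self.weights + self.bias
            # Gradients of the mean squared error w.r.t. weights and bias.
            error = y_pred - y
            grad_w = (2.0 / n_samples) * (X.T @ error)
            grad_b = (2.0 / n_samples) * np.sum(error)
            # Gradient descent update.
            self.weights -= self.learning_rate * grad_w
            self.bias -= self.learning_rate * grad_b
        return self

    def predict(self, X):
        return X @ self.weights + self.bias


if __name__ == "__main__":
    # Tiny synthetic example: y = 3x + 2 plus a little noise.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = 3 * X[:, 0] + 2 + rng.normal(scale=0.1, size=200)

    model = LinearRegressionGD(learning_rate=0.1, n_iterations=2000).fit(X, y)
    print("weight ≈", model.weights[0], "bias ≈", model.bias)
```

The emphasis on a small, explicit training loop rather than a heavily optimised one reflects the project's goal of making the inner workings easy to read.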
Please go through the whole Contributing Guidelines here.
- Make sure you do not copy code from external sources, because such work will not be considered. Plagiarism is strictly prohibited.
- You can only work on issues that have been assigned to you.
- If you want to contribute to an existing algorithm, we prefer that you create an issue before making a PR and link your PR to that issue.
- If you have modified or added code, make sure it runs correctly before submitting.
- Strictly use snake_case (underscore_separated) for your file names and push them to the correct folder.
- Do not update the README.md.
- Your files should be uploaded inside the corresponding code folder, within the appropriate language folder (for instance, if you wrote code for a K-Means implementation, it goes inside the K-Means folder).
- Under no circumstances should you create new folders within the language folders to upload your code, unless specifically told to do so.
- Edit the corresponding README.md file to add a link to your code in the relevant section (see the GitHub Markdown Guide).
A strong contribution is valuable in its own right and brings you lasting satisfaction 👍🌟
You can find our Code of Conduct here.
This project is licensed under the MIT License.
- Aditya Kumar Gupta 💻 🖋
- Ashwani Rathee 💻
- Yukti Sachdeva 💻