Open-source course by Decoding ML in collaboration with Hopsworks.
This hands-on course teaches you how to build and deploy a real-time personalized recommender system for H&M fashion articles. You'll learn:
- A practical 4-stage recommender architecture
- Two-tower model implementation and training
- Scalable ML system design principles
- MLOps best practices
- Real-time model deployment
- LLM-enhanced recommendations
- Building an interactive web interface
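To give a taste of the two-tower idea before Module 1: each tower maps a customer or an article into the same embedding space, and retrieval is a nearest-neighbour search over those embeddings. Below is a minimal, pure-Python sketch with made-up toy embeddings; a real system learns the embeddings with neural networks and serves them through an approximate nearest-neighbour index.

```python
# Sketch of two-tower retrieval. All embeddings here are hypothetical,
# hard-coded values standing in for learned tower outputs.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

# Query tower output: one embedding per customer (toy values).
customer_embeddings = {"cust_1": [0.9, 0.1, 0.3]}

# Item tower output: one embedding per article (toy values).
article_embeddings = {
    "art_a": [0.8, 0.2, 0.4],
    "art_b": [0.1, 0.9, 0.2],
    "art_c": [0.7, 0.0, 0.5],
}

def retrieve(customer_id, k=2):
    # Brute-force nearest-neighbour search; production systems
    # replace this with an ANN index over millions of items.
    q = customer_embeddings[customer_id]
    scored = sorted(
        article_embeddings.items(),
        key=lambda kv: cosine(q, kv[1]),
        reverse=True,
    )
    return [article_id for article_id, _ in scored[:k]]
```

For example, `retrieve("cust_1", k=2)` returns the two articles whose embeddings point in the most similar direction to the customer's embedding.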
This course is ideal for:
- ML/AI engineers interested in building production-ready recommender systems
- Data Engineers, Data Scientists, and Software Engineers wanting to understand the engineering behind recommenders
Note: This course focuses on engineering practices and end-to-end system implementation rather than theoretical model optimization or research.
| Category | Requirements |
| --- | --- |
| Skills | Basic knowledge of Python and Machine Learning |
| Hardware | Any modern laptop/workstation will do the job (no GPU or powerful computing power required). |
| Level | Intermediate |
All tools used throughout the course stay within their free tiers, except OpenAI's API, as follows:
- Lessons 1-4: Completely free
- Lesson 5 (Optional): ~$1-2 for OpenAI API usage when building LLM-enhanced recommenders
This self-paced course consists of 5 comprehensive modules covering theory, system design, and hands-on implementation.
Our recommendation for each module:
- Read the article
- Run the Notebook (locally or on Colab)
- Go deeper into the code
| Module | Article | Description | Local Notebooks | Colab Notebooks |
| --- | --- | --- | --- | --- |
| 1 | Building a TikTok-like recommender | Learn how to architect a recommender system using the 4-stage architecture and two-tower network. | No code | No code |
| 2 | Feature pipelines for TikTok-like recommenders | Learn how to build a scalable feature pipeline using a feature store. | 1_fp_computing_features.ipynb | - |
| 3 | Training pipelines for TikTok-like recommenders | Learn to train and evaluate the two-tower network and ranking model using MLOps best practices. | 2_tp_training_retrieval_model.ipynb, 3_tp_training_ranking_model.ipynb | - |
| 4 | The inference pipelines | Learn how to deploy models for real-time inference (WIP). | 4_ip_computing_item_embeddings.ipynb, 5_ip_creating_deployments.ipynb, 6_scheduling_materialization_jobs.ipynb | - |
| 5 | Building personalized real-time recommenders with LLMs | Learn how to enhance recommendations with LLMs (WIP). | - | - |
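The 4-stage architecture covered in Module 1 can be sketched roughly as follows. Every function body below is a hypothetical stand-in (hard-coded candidates and scores) for the real components: candidate retrieval over the two-tower embeddings, business-rule filtering, the ranking model, and final ordering.

```python
# Toy walk-through of the 4-stage recommender flow; all data is made up.

def retrieve_candidates(customer_id):
    # Stage 1: pull candidate articles, normally via approximate
    # nearest-neighbour search over the two-tower embeddings.
    return ["art_a", "art_b", "art_c", "art_d"]

def filter_candidates(candidates, already_bought):
    # Stage 2: drop items the customer already owns (or other exclusions).
    return [c for c in candidates if c not in already_bought]

def rank(candidates, customer_id):
    # Stage 3: score each remaining candidate; toy scores stand in
    # for a learned ranking model.
    toy_scores = {"art_a": 0.9, "art_b": 0.2, "art_c": 0.7, "art_d": 0.4}
    return sorted(candidates, key=lambda c: toy_scores[c], reverse=True)

def order(ranked, k=3):
    # Stage 4: apply final business logic (here just a top-k cut).
    return ranked[:k]

def recommend(customer_id, already_bought, k=3):
    candidates = retrieve_candidates(customer_id)
    kept = filter_candidates(candidates, already_bought)
    return order(rank(kept, customer_id), k)
```

For instance, `recommend("cust_1", already_bought={"art_a"}, k=2)` retrieves four candidates, filters out the owned article, ranks the rest, and serves the top two.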
Note
Check the INSTALL_AND_USAGE doc for a step-by-step installation and usage guide.
At Decoding ML we teach how to build production ML systems, thus the course follows the structure of a real-world Python project:
```text
.
├── notebooks/          # Jupyter notebooks for each pipeline
├── recsys/             # Core recommender system package
│   ├── config.py       # Configuration and settings
│   ...
│   └── training/       # Training pipelines code
├── tools/              # Utility scripts
├── streamlit_app.py    # Streamlit app entry point
├── .env.example        # Example environment variables template
├── Makefile            # Commands to install and run the project
└── pyproject.toml      # Project dependencies
```
Try out our deployed H&M real-time personalized recommender: 💻 Live Streamlit Demo
Important

If you get `ModelServingException` or `ConnectionError` errors, the instances are still scaled to 0, so give them a few minutes to scale up, then refresh the page. This happens because we run the demo in 0-cost mode:
- Scaling from 0 to 1+ instances may take 1-2 minutes.
- If you encounter connection errors, try selecting different customers or refresh the page.
- The system will become responsive once the deployment is active.
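If you script against the demo endpoint, a simple retry with exponential backoff covers the scale-from-zero warm-up. This is a generic sketch, not part of the course code; `call_with_retry` is a hypothetical helper that wraps whatever request function you use.

```python
import time

def call_with_retry(fn, retries=5, base_delay=2.0):
    """Retry fn() with exponential backoff, e.g. while a
    scaled-to-zero deployment is still warming up."""
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # still failing after the last attempt
            # Wait 2s, 4s, 8s, ... before trying again.
            time.sleep(base_delay * 2 ** attempt)
```

With the default settings this keeps trying for roughly two minutes, which matches the 1-2 minute scale-up window mentioned above.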
For detailed installation and usage instructions, see our INSTALL_AND_USAGE guide.
Recommendation: While you can follow the installation guide directly, we strongly recommend reading the accompanying articles to gain a complete understanding of the recommender system.
Have questions or running into issues? We're here to help!
Open a GitHub issue for:
- Questions about the course material
- Technical troubleshooting
- Clarification on concepts
If you run into issues with Hopsworks Serverless, the best place to ask questions is Hopsworks's Slack, where their engineers can help you directly.
| Contributor | Role |
| --- | --- |
| Paul Iusztin | AI/ML Engineer |
| Anca Ioana Muscalagiu | AI/ML Engineer |
| Paolo Perrone | AI/ML Engineer |
| Hopsworks's Engineering Team | AI Lakehouse |
This course is an open-source project released under the MIT license. Thus, as long as you distribute our LICENSE and acknowledge that your project is based on our work, you can safely clone or fork this project and use it as a source of inspiration for your educational projects (e.g., university, college degree, personal projects, etc.).