This project is the final Capstone project of the Udacity Data Scientist Nanodegree program. The aim is to learn how to manipulate realistic datasets with Spark and engineer relevant features for predicting churn. The input data comes from the fictional music streaming service Sparkify (similar to Spotify and Pandora).
- Installation
- Project Motivation
- File Descriptions
- Project Process
- Result Summary
- Licensing, Authors, Acknowledgements
The code should run with no issues using Python 3. See the requirements.txt file for details about library versions.
As a long-time user of the Spotify streaming service, I've wanted to test out some ideas for a while. This project let me work with similar data and also try out Spark, which is new to me.
- mini_sparkify_event_data.json: Fictitious music streaming data provided by Udacity. This file is not available in the repository because of its size.
- sparkify.ipynb: Exploratory notebook with all steps necessary to build a model that predicts churn for the Sparkify data (a minimal loading sketch follows this list).
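For orientation, here is a minimal sketch of how the event log can be loaded and a churn label derived with PySpark. It assumes the dataset exposes userId and page columns and that churn corresponds to "Cancellation Confirmation" page events; the notebook contains the actual preprocessing.

```python
# Minimal sketch: load the Sparkify event log and derive a per-user churn label.
# Assumes userId/page columns and that "Cancellation Confirmation" marks churn.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("Sparkify").getOrCreate()

events = spark.read.json("mini_sparkify_event_data.json")

# Drop rows without a user id, then flag users who confirmed a cancellation.
events = events.filter(F.col("userId") != "")
churn_flag = F.when(F.col("page") == "Cancellation Confirmation", 1).otherwise(0)

churn_per_user = (
    events.withColumn("churn_event", churn_flag)
          .groupBy("userId")
          .agg(F.max("churn_event").alias("churn"))
)
churn_per_user.show(5)
```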
In this project I followed the CRISP-DM process. The details can be found in the notebook and in this blog post: https://medium.com/@marcusnilsson78/sparkify-project-write-up-5dfbaac36cc6
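As a rough illustration of the modelling step, the sketch below assembles per-user features into a Spark ML pipeline. It reuses the SparkSession from the snippet above; the feature columns num_songs and num_thumbs_down are hypothetical placeholders, not the actual features engineered in the notebook.

```python
# Hypothetical modelling sketch: placeholder features, not the notebook's real ones.
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.classification import LogisticRegression

# Toy per-user feature table; in the notebook this comes from feature engineering.
features_df = spark.createDataFrame(
    [("1", 120.0, 3.0, 0), ("2", 15.0, 9.0, 1), ("3", 300.0, 1.0, 0)],
    ["userId", "num_songs", "num_thumbs_down", "churn"],
)

assembler = VectorAssembler(inputCols=["num_songs", "num_thumbs_down"],
                            outputCol="raw_features")
scaler = StandardScaler(inputCol="raw_features", outputCol="features")
lr = LogisticRegression(labelCol="churn", featuresCol="features")

pipeline = Pipeline(stages=[assembler, scaler, lr])

# Fit and score on the toy table; in practice a held-out validation split is used.
model = pipeline.fit(features_df)
predictions = model.transform(features_df)
```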
The model achieved an F1 score of 46.2% in validation, which is not that good. Since the available dataset was very small, it's hard to trust the model in its current state; it should be trained and validated further on a larger dataset. Please see the blog post mentioned above for the complete results.
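For reference, one common way to compute an F1 score on Spark ML predictions is shown below; the exact evaluation used in the notebook may differ (Spark's multiclass evaluator reports a weighted F1 by default).

```python
# Evaluate F1 on the predictions DataFrame produced by the pipeline above.
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

evaluator = MulticlassClassificationEvaluator(labelCol="churn",
                                              predictionCol="prediction",
                                              metricName="f1")
f1 = evaluator.evaluate(predictions)
print(f"F1 score: {f1:.3f}")
```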
See details in licensing file.
Data in the .json file cannot be used without consent from Udacity.
Besides the Udacity training material, the following resources were very helpful in this project:
- https://mapr.com/blog/predicting-breast-cancer-using-apache-spark-machine-learning-logistic-regression/
- https://stackoverflow.com/questions/36132322/join-two-data-frames-select-all-columns-from-one-and-some-columns-from-the-othe
- https://stackoverflow.com/questions/38664620/any-way-to-access-methods-from-individual-stages-in-pyspark-pipelinemodel?rq=1