This is a demonstration solution for designing a scalable ML application that follows the end-to-end Machine Learning lifecycle (MLOps).
Set the Python path by creating a .env file if you are using VS Code with the Microsoft Python interpreter. Create the .env file as shown below, then install the project requirements (for example, pip install -r requirements.txt).
WORKSPACE_FOLDER=/home/asok/toolbox/mlops/End_To_End_Model_Deployment_MLOps
PYTHONPATH=${WORKSPACE_FOLDER}
Feast feature store - https://docs.feast.dev/getting-started/create-a-feature-repository
The associated feature store configuration is in notebooks/Feast_Feature_Store.ipynb
Materialize features into the online store for the given time range:
feast materialize 2020-01-01T00:00:00 2021-07-29T00:00:00
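As an alternative to the CLI, materialization can also be triggered from Python; a minimal sketch, assuming feature_store.yaml sits in the current directory (the repo path and dates are illustrative):

from datetime import datetime
from feast import FeatureStore

# Point at the directory that contains feature_store.yaml (illustrative path).
store = FeatureStore(repo_path=".")

# Load feature values for the given window into the online store,
# mirroring the CLI command above.
store.materialize(
    start_date=datetime(2020, 1, 1),
    end_date=datetime(2021, 7, 29),
)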
Execute the commands below for code formatting and linting
black .
flake8
isort .
The DataOps and MLOps pipelines are configured as Airflow DAGs; run them using
export AIRFLOW_HOME=${PWD}/airflow
airflow webserver --port 8082
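DAG definitions live under the dags folder of AIRFLOW_HOME; a minimal sketch of how one of the pipelines could be declared (the DAG id, schedule, and task callable are illustrative, not the repo's actual DAG):

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_data_validation():
    # Placeholder for the DataOps step (illustrative).
    pass

with DAG(
    dag_id="dataops_pipeline",        # illustrative DAG id
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="validate_data",
        python_callable=run_data_validation,
    )

Note that the webserver only serves the UI; the Airflow scheduler must also be running for the DAGs to execute.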
Model experiments are captured in the /Mlruns folder and are viewable in the MLflow UI, started with:
mlflow ui
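Runs under /Mlruns are produced through the MLflow tracking API; a minimal sketch of the logging pattern (the experiment name, parameter, and metric are illustrative):

import mlflow

# Illustrative experiment name; the pipeline defines its own.
mlflow.set_experiment("credit_fraud_detection")

with mlflow.start_run():
    mlflow.log_param("max_depth", 6)
    mlflow.log_metric("auc", 0.91)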
If port 5000 (the MLflow UI default) is already in use, find and kill the process holding it.
On Windows:
netstat -ano | findstr :5000
taskkill /pid <PID> /f
On Linux:
sudo lsof -i:5000
sudo kill <PID>
Run the Great Expectations checkpoint via the CLI:
great_expectations --v3-api checkpoint run credit_transactions
Running a checkpoint in Airflow - https://docs.greatexpectations.io/docs/deployment_patterns/how_to_run_a_checkpoint_in_airflow
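A minimal sketch of wiring that checkpoint into an Airflow DAG with a BashOperator, per the guide above (the DAG id and schedule are illustrative; the guide also covers the dedicated Great Expectations operator):

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="data_validation",         # illustrative DAG id
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    BashOperator(
        task_id="run_ge_checkpoint",
        # Same CLI command as above; assumes it runs from the project root.
        bash_command="great_expectations --v3-api checkpoint run credit_transactions",
    )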
The model serving code is in the /serving folder. Start the FastAPI app with uvicorn:
uvicorn app:app --reload
The interactive API docs are then available at host:port/docs
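A minimal sketch of what the app module in /serving could look like (the endpoint, request schema, and scoring logic are illustrative placeholders, not the repo's actual code):

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Transaction(BaseModel):
    # Illustrative request schema; the real feature set is defined in the repo.
    amount: float
    merchant_id: int

@app.post("/predict")
def predict(txn: Transaction):
    # Placeholder: load the trained model and score the request here.
    score = 0.0
    return {"fraud_probability": score}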
You need to install Prometheus and Grafana.
- Configure the FastAPI endpoint as a scrape target in the Prometheus configuration file (prometheus.yml)
- Add SMTP info for alerts in the grafana.ini file
After the FastAPI app exposes its metrics, they are scraped by Prometheus and are viewable in Grafana.
Configure a Grafana dashboard to view the metrics according to the metrics content.
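One common way to expose FastAPI metrics for Prometheus to scrape is the prometheus-fastapi-instrumentator package; whether this repo uses that library is an assumption, but the pattern is a few lines added to the serving app:

from fastapi import FastAPI
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()

# Instrument all routes and expose the collected metrics at /metrics,
# which is the endpoint Prometheus is configured to scrape.
Instrumentator().instrument(app).expose(app)

Point the scrape target in the Prometheus configuration at host:port/metrics of the serving app.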
Future work: incorporate active learning into the pipeline.