This guide will walk you through the process of setting up ModelSmith using Docker, including how to configure the necessary environment variables.
Before you begin, make sure you have Docker and Docker Compose installed on your system. If you haven't installed them yet, follow these steps:
1. Install Docker:
- For Windows and Mac: Download and install Docker Desktop from the official Docker website.
- For Linux: Follow the installation instructions for your specific distribution in the Docker documentation.
2. Install Docker Compose:
- Docker Desktop for Windows and Mac includes Docker Compose by default.
- For Linux, follow the installation instructions in the Docker Compose documentation.
Once you have Docker and Docker Compose installed, follow these steps to set up ModelSmith:
First, build the Docker images for both the frontend and backend:
cd modelsmith
docker-compose build
This command will create the necessary images based on the configurations in your `docker-compose.yml` file.
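For reference, a `docker-compose.yml` along these lines would produce the two images used below. This is only a sketch: the service names, build contexts, and image names here are assumptions, not the project's actual file.

```yaml
# Hypothetical sketch -- ModelSmith's real docker-compose.yml may differ.
services:
  ms-frontend:
    build: ./frontend        # assumed build context
    image: ms-frontend
    ports:
      - "4200:4200"
  ms-backend:
    build: ./backend         # assumed build context
    image: ms-backend
    ports:
      - "3000:3000"
```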
Start the frontend container with the following command:
docker run -d --name ms-frontend-container -p 4200:4200 ms-frontend
This command runs the frontend container in detached mode (`-d`), names it `ms-frontend-container`, and maps port 4200 of the container to port 4200 on your host machine.
Before running the backend container, you need to set up the environment variables. These variables configure various aspects of the backend, including paths, access tokens, and connection details. Here's an explanation of each variable:
- `MACHINE_LEARNING_CORE_PATH`: Path to the machine learning core within the container. The default value is `../machine_learning_core`.
- `CONDA_SH_PATH`: Path to the Conda shell script for environment activation within the container. The default value is `../../root/miniconda3/etc/profile.d/conda.sh`.
- `HUGGING_FACE_ACCESS_TOKEN`: Your Hugging Face access token for model downloads. Please refer to the AutoAWQ Configuration Guide.
- `CONNECTION_TYPE`: Type of connection (`LOCAL` for a Docker container on a machine with a host GPU).
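Rather than passing each variable with a separate `-e` flag, you can collect them in an env file and hand it to Docker with `--env-file`. A minimal sketch (the file name `backend.env` and the values are placeholders, not part of the project):

```shell
# Write the backend variables to an env file (values are placeholders).
cat > backend.env <<'EOF'
MACHINE_LEARNING_CORE_PATH=/app/machine_learning_core
CONDA_SH_PATH=/root/miniconda3/etc/profile.d/conda.sh
HUGGING_FACE_ACCESS_TOKEN=your_hugging_face_token_here
CONNECTION_TYPE=LOCAL
EOF
```

The container could then be started with `docker run -d --name ms-backend-container --env-file backend.env -p 3000:3000 ms-backend`, equivalent to the explicit `-e` form shown below.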
Note: The initial backend configuration may take 5-10 minutes.
Now, run the backend container with the following command, replacing the placeholder values with your actual configuration:
docker run -d --name ms-backend-container \
-e "MACHINE_LEARNING_CORE_PATH=/app/machine_learning_core" \
-e "CONDA_SH_PATH=/root/miniconda3/etc/profile.d/conda.sh" \
-e "HUGGING_FACE_ACCESS_TOKEN=your_hugging_face_token_here" \
-e "CONNECTION_TYPE=LOCAL" \
-p 3000:3000 \
ms-backend
Make sure to replace the placeholder values with your actual configuration details.
To configure the necessary features, enter the backend container:
docker exec -it ms-backend-container /bin/bash
This command opens a bash shell inside the running backend container.
Inside the backend container, configure AutoAWQ by following the instructions in the AutoAWQ Configuration Guide.
Still inside the backend container, configure Multiflow by following the instructions in the Multiflow Configuration Guide.
These configurations are required to enable all of the application's features; if they are skipped, only the machine learning, quantization, and pruning features will work.
The backend container will perform the following initialization steps:
- VM Initialization: If the `machine_learning_core` folder is not already present in the container, it will be copied from the specified path.
- Conda Environment Setup: If the Conda environment is not found, it will install and configure it for the project.
This ensures that all necessary dependencies and configurations are set up correctly before starting the backend services.
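The copy-if-missing logic behind the first initialization step can be sketched as follows. The paths here are stand-in temporary directories for illustration, not the container's real paths:

```shell
# Demonstrate the "copy the core only if it is not already present" pattern.
src=$(mktemp -d)                       # stands in for MACHINE_LEARNING_CORE_PATH
dst_parent=$(mktemp -d)                # stands in for the container's app directory
dst="$dst_parent/machine_learning_core"
touch "$src/model.py"                  # pretend this file is the ML core

if [ ! -d "$dst" ]; then               # first run: folder missing, so copy it
  cp -r "$src" "$dst"
fi
ls "$dst"                              # the copied core is now in place
```

On a second run the `-d` test succeeds and the copy is skipped, which is why restarting the container is much faster than the first start.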
After running these commands, you should be able to access:
- The frontend at `http://localhost:4200`
- The backend at `http://localhost:3000`
If you encounter any issues:
- Ensure all required ports (4200 and 3000) are available and not in use by other applications.
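A quick way to check port availability from the host is `lsof`, if it is installed on your system (a minimal sketch; on Linux, `ss -ltn` works similarly):

```shell
# Report whether the two ModelSmith ports are already taken on the host.
for port in 4200 3000; do
  if lsof -i :"$port" >/dev/null 2>&1; then
    echo "port $port is in use"
  else
    echo "port $port appears free"
  fi
done
```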
- Check Docker logs for any error messages:
docker logs ms-frontend-container
docker logs ms-backend-container
- Verify that all environment variables are correctly set in the backend container command.
For further assistance, please refer to our FAQ section or contact our support team.