This guide will show you how to put your own machine learning model in a Docker image using Cog. If you haven't got a model to try out, you'll want to follow the main getting started guide.
- macOS or Linux. Cog works on macOS and Linux, but does not currently support Windows.
- Docker. Cog uses Docker to create a container for your model. You'll need to install Docker before you can run Cog.
First, install Cog if you haven't already:
```shell
sudo curl -o /usr/local/bin/cog -L https://github.com/replicate/cog/releases/latest/download/cog_`uname -s`_`uname -m`
sudo chmod +x /usr/local/bin/cog
```
To configure your project for use with Cog, you'll need to add two files:
- `cog.yaml` defines system requirements, Python package dependencies, etc.
- `predict.py` describes the prediction interface for your model

Use the `cog init` command to generate these files in your project:
```console
$ cd path/to/your/model
$ cog init
```
The `cog.yaml` file defines all the different things that need to be installed for your model to run. You can think of it as a simple way of defining a Docker image.
For example:
```yaml
build:
  python_version: "3.11"
  python_packages:
    - "torch==2.0.1"
```
This will generate a Docker image with Python 3.11 and PyTorch 2 installed, for both CPU and GPU, with the correct version of CUDA and various other sensible best practices.
To run a command inside this environment, prefix it with `cog run`:
```console
$ cog run python
✓ Building Docker image from cog.yaml... Successfully built 8f54020c8981
Running 'python' in Docker with the current directory mounted as a volume...
────────────────────────────────────────────────────────────────────────────────────────
Python 3.11.1 (main, Jan 27 2023, 10:52:46)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
This is handy for ensuring a consistent environment for development or training.
With `cog.yaml`, you can also install system packages and other things. Take a look at the full reference to see what else you can do.
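For instance, system-level libraries can be declared under `system_packages`. This is a hedged sketch: the package names below are illustrative only, so pick whatever your model actually needs:

```yaml
build:
  python_version: "3.11"
  system_packages:
    - "ffmpeg"   # illustrative: if your model processes audio or video
    - "libgl1"   # illustrative: commonly needed for OpenCV-based pipelines
  python_packages:
    - "torch==2.0.1"
```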
The next step is to update `predict.py` to define the interface for running predictions on your model. The `predict.py` generated by `cog init` looks something like this:
```python
from cog import BasePredictor, Path, Input
import torch


class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.net = torch.load("weights.pth")

    def predict(self,
            image: Path = Input(description="Image to enlarge"),
            scale: float = Input(description="Factor to scale image by", default=1.5)
    ) -> Path:
        """Run a single prediction on the model"""
        # ... pre-processing ...
        output = self.net(input)
        # ... post-processing ...
        return output
```
Edit your `predict.py` file and fill in the functions with your own model's setup and prediction code. You might need to import parts of your model from another file.
You also need to define the inputs to your model as arguments to the `predict()` function, as demonstrated above. For each argument, you need to annotate it with a type. The supported types are:

- `str`: a string
- `int`: an integer
- `float`: a floating point number
- `bool`: a boolean
- `cog.File`: a file-like object representing a file
- `cog.Path`: a path to a file on disk
You can provide more information about the input with the `Input()` function, as shown above. It takes these basic arguments:

- `description`: A description of what to pass to this input for users of the model
- `default`: A default value to set the input to. If this argument is not passed, the input is required. If it is explicitly set to `None`, the input is optional.
- `ge`: For `int` or `float` types, the value should be greater than or equal to this number.
- `le`: For `int` or `float` types, the value should be less than or equal to this number.
- `choices`: For `str` or `int` types, a list of possible values for this input.
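As a sketch of how the constraint arguments fit together, here is a `predict()` signature combining `ge`, `le`, and `choices`. The input names (`scale`, `fmt`) and bounds are illustrative, and the `try`/`except` stub exists only so the snippet can be read and run even without Cog installed; with Cog present, the real `Input` performs the actual validation:

```python
# Sketch: constrained inputs using ge, le, and choices.
# The fallback stub below does no real validation -- it just lets
# this file be imported in an environment without cog installed.
try:
    from cog import BasePredictor, Input
except ImportError:
    class BasePredictor:
        pass

    def Input(description=None, default=None, ge=None, le=None, choices=None):
        return default


class Predictor(BasePredictor):
    def predict(self,
            scale: float = Input(description="Factor to scale image by",
                                 ge=1.0, le=10.0, default=1.5),
            fmt: str = Input(description="Output format",
                             choices=["png", "jpeg"], default="png"),
    ) -> str:
        # ... run the model, save the result, return its path ...
        return f"output.{fmt}"
```

With this in place, `cog predict` would reject `scale=20.0` or `fmt=gif` before your code ever runs.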
There are some more advanced options you can pass, too. For more details, take a look at the prediction interface documentation.
Next, add the line `predict: "predict.py:Predictor"` to your `cog.yaml`, so it looks something like this:
```yaml
build:
  python_version: "3.11"
  python_packages:
    - "torch==2.0.1"
predict: "predict.py:Predictor"
```
That's it! To test that this works, try running a prediction on the model:
```console
$ cog predict -i image=@input.jpg
✓ Building Docker image from cog.yaml... Successfully built 664ef88bc1f4
✓ Model running in Docker image 664ef88bc1f4
Written output to output.png
```
To pass more inputs to the model, you can add more `-i` options:
```console
$ cog predict -i image=@input.jpg -i scale=2.0
```
In this case it is just a number, not a file, so you don't need the `@` prefix.
To use GPUs with Cog, add the `gpu: true` option to the `build` section of your `cog.yaml`:
```yaml
build:
  gpu: true
  ...
```
Cog will use the nvidia-docker base image and automatically figure out which versions of CUDA and cuDNN to use, based on the versions of Python, PyTorch, and TensorFlow that you are using.
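If the auto-detected version isn't the one you need, recent versions of Cog also accept an explicit CUDA pin in the `build` section. The version string below is illustrative; check the `cog.yaml` reference for the keys your Cog version supports:

```yaml
build:
  gpu: true
  cuda: "11.8"  # illustrative: pin only if auto-detection picks the wrong version
```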
For more details, see the `gpu` section of the `cog.yaml` reference.
Next, you might want to take a look at: