diff --git a/cerebrium/prebuilt-models/introduction.mdx b/cerebrium/prebuilt-models/introduction.mdx
index d7e039a0..64e2d48b 100644
--- a/cerebrium/prebuilt-models/introduction.mdx
+++ b/cerebrium/prebuilt-models/introduction.mdx
@@ -9,6 +9,12 @@ Cerebrium and its community keep a library of popular pre-built models that you
- Post a [bounty](https://www.cerebrium.ai/bounties) for our community to create it
- [Contact](mailto:support@cerebrium.ai) the Cerebrium team and we will see what we can do
-You can deploy prebuilt models via Cerebrium by using a simple one-click deploy from your dashboard by navigating to the Prebuilt tab.
+You can deploy prebuilt models via Cerebrium with a simple one-click deploy: navigate to the Prebuilt tab in your dashboard.
+Alternatively, if you would like to read through the source code, you can navigate to the [Cerebrium Prebuilts GitHub](https://github.com/CerebriumAI/cerebrium-prebuilts) repository, which contains the source code for each of the models.
+Each model's folder is a Cortex deployment that can be deployed with the `cerebrium deploy` command. Simply navigate to the folder of the model you would like to deploy and run:
-Check out the available models through your dashboard or by reading our docs below!
+```bash
+cerebrium deploy <model-name> --config-file config.yaml
+```
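+
+For example, here is a minimal end-to-end sketch (the `whisper` folder and deployment name below are assumptions; substitute the folder of the model you want to deploy):
+
+```bash
+# Clone the prebuilts repository (one folder per model)
+git clone https://github.com/CerebriumAI/cerebrium-prebuilts.git
+
+# Enter the folder of the model you want to deploy (folder name assumed)
+cd cerebrium-prebuilts/whisper
+
+# Deploy it under a name of your choosing, using the folder's config file
+cerebrium deploy whisper --config-file config.yaml
+```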
+
+Check out the available models through your Cerebrium dashboard or by reading our docs in the **Prebuilt Models** tab!
diff --git a/cerebrium/prebuilt-models/language-models/mt0.mdx b/cerebrium/prebuilt-models/language-models/mt0.mdx
deleted file mode 100644
index 54a4199c..00000000
--- a/cerebrium/prebuilt-models/language-models/mt0.mdx
+++ /dev/null
@@ -1,61 +0,0 @@
----
-title: "MT0"
-description: "MT0 is a model capable of following human instructions in dozens of languages"
----
-
-It is pretrained on a cross-lingual task mixture (xP3) and the resulting model is capable of cross-lingual generalization to unseen tasks & languages.You can read more [here](https://huggingface.co/bigscience/mt0-xl). We currently have the following MT0 models available below however if you would like any others contact support, and we can quickly add it for you. To deploy it, you can use the identifier below:
-
-- mt0-xxl: `mt0-xxl`
-
-Once you've deployed a MT0 model, you can supply the endpoint with a prompt. Here's an example of how to call the deployed endpoint:
-
-#### Request Parameters
-
-
-```bash Request
- curl --location --request POST 'https://run.cerebrium.ai/mt0-xxl-webhook/predict' \
- --header 'Authorization: ' \
- --header 'Content-Type: application/json' \
- --data-raw '{
- "prompt": "Translate from french to English: Je t'aime."
- }'
-```
-
-
-
- This is the Cerebrium API key used to authenticate your request. You can get
- it from your Cerebrium dashboard.
-
-
- The prompt you would like mt0 to process.
-
-
-
-
-```json Response
-{
- "run_id": "",
- "run_time_ms": 251,
- "message": "Successfully generated text",
- "result": "I love you"
-}
-```
-
-
-
-#### Response Parameters
-
-
- A unique identifier for the run that you can use to associate prompts with
- webhook endpoints.
-
-
- The amount of time in millisecond it took to run your function. This is what
- you will be billed for.
-
-
- Whether of not the response was successful
-
-
- The result generated from mt0
-
diff --git a/cerebrium/prebuilt-models/language-models/roberta.mdx b/cerebrium/prebuilt-models/language-models/roberta.mdx
deleted file mode 100644
index 64fb212b..00000000
--- a/cerebrium/prebuilt-models/language-models/roberta.mdx
+++ /dev/null
@@ -1,69 +0,0 @@
----
-title: "Roberta"
-description: "Raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task"
----
-
-RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. It was trained on English language using a masked language modeling (MLM) objective.
-It was introduced in this [paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it makes a difference between english and English.
-We currently have the following Roberta models available below however if you would like any others contact support, and we can quickly add it for you. To deploy it, you can use the identifier below:
-
-- Roberta Large: `roberta-large`
-
-Once you've deployed a Roberta model, you can supply the endpoint with a prompt. Here's an example of how to call the deployed endpoint:
-
-#### Request Parameters
-
-
-```bash Request
- curl --location --request POST 'https://run.cerebrium.ai/roberta-large-webhook/predict' \
- --header 'Authorization: ' \
- --header 'Content-Type: application/json' \
- --data-raw '{
- "prompt": " is the capital of France"
- }'
-```
-
-
-
- This is the Cerebrium API key used to authenticate your request. You can get it from your Cerebrium dashboard.
-
-
- The prompt you would like Roberta to process. Please make sure that you include the \ keyword.
-
-
-
-
-```json Response
- "run_id": "",
- "run_time_ms": 251,
- "message": "Successfully generated text",
- "result": [{'sequence': "Paris",
- 'score': 0.3317350447177887,
- 'token': 2943,
- 'token_str': 'Paris'},
- {'sequence': "Nice",
- 'score': 0.14171843230724335,
- 'token': 2734,
- 'token_str': 'Nice'},
- ...
- ]`
-```
-
-
-
-#### Response Parameters
-
-
- A unique identifier for the run that you can use to associate prompts with
- webhook endpoints.
-
-
- The amount of time in millisecond it took to run your function. This is what
- you will be billed for.
-
-
- Whether of not the response was successful
-
-
- The result generated from Roberta
-
diff --git a/cerebrium/prebuilt-models/other-models/salesforce-codegen.mdx b/cerebrium/prebuilt-models/other-models/salesforce-codegen.mdx
deleted file mode 100644
index 572d1f0c..00000000
--- a/cerebrium/prebuilt-models/other-models/salesforce-codegen.mdx
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: "Salesforce Codegen"
-description: "CodeGen is a family of autoregressive language models for program synthesis"
----
-
-CodeGen is a family of autoregressive language models for program synthesis. Mono models were trained on Python code and multi was trained on multiple programming languages such as C, C++, Go, Java, JS and Python.
-The best way to use CodeGen is to give it a prompt describing the code you would like it to generate - you can read more [here](https://huggingface.co/Salesforce/codegen-350M-multi).
-
-We currently have the following Codegen models available below however if you would like any others contact support, and we can quickly add it for you. To deploy it, you can use the identifier below:
-
-- Codegen 350M-multi: `sf-codegen-350-multi`
-
-Once you've deployed a Codegen model, you can supply the endpoint with a prompt. Here's an example of how to call the deployed endpoint:
-
-#### Request Parameters
-
-
-```bash Request
- curl --location --request POST 'https://run.cerebrium.ai/sf-codegen-350-multi-webhook/predict' \
- --header 'Authorization: ' \
- --header 'Content-Type: application/json' \
- --data-raw '{
- "prompt": "Generate a python function that prints 'Hello world'",
- "max_sequence_length": 200
- }'
-```
-
-
-
- This is the Cerebrium API key used to authenticate your request. You can get
- it from your Cerebrium dashboard.
-
-
- The prompt you would like Codegen to process.
-
-
- The max sequence length that codegen can generate
-
-
-
-
-```json Response
-{
- "run_id": "",
- "run_time_ms": 251,
- "message": "Successfully generated text",
- "result": "def hello_world(self):\nprint \"Hello world\""
-}
-```
-
-
-
-#### Response Parameters
-
-
- A unique identifier for the run that you can use to associate prompts with
- webhook endpoints.
-
-
- The amount of time in millisecond it took to run your function. This is what
- you will be billed for.
-
-
- Whether of not the response was successful
-
-
- The result generated from Codegen
-
diff --git a/mint.json b/mint.json
index 9e25cc16..64d76a1b 100644
--- a/mint.json
+++ b/mint.json
@@ -116,20 +116,14 @@
"group": "Language Models",
"pages": [
"cerebrium/prebuilt-models/language-models/whisper",
- "cerebrium/prebuilt-models/language-models/mt0",
"cerebrium/prebuilt-models/language-models/flanT5",
"cerebrium/prebuilt-models/language-models/GPT-Neo",
- "cerebrium/prebuilt-models/language-models/roberta",
"cerebrium/prebuilt-models/language-models/pygmalion",
"cerebrium/prebuilt-models/language-models/tortoise",
"cerebrium/prebuilt-models/language-models/gpt4all",
"cerebrium/prebuilt-models/language-models/llamav2",
"cerebrium/prebuilt-models/language-models/falcon"
]
- },
- {
- "group": "Other",
- "pages": ["cerebrium/prebuilt-models/other-models/salesforce-codegen"]
}
]
},