diff --git a/cerebrium/environments/initial-setup.mdx b/cerebrium/environments/initial-setup.mdx
index 33b2c2b0..1b4b3dc6 100644
--- a/cerebrium/environments/initial-setup.mdx
+++ b/cerebrium/environments/initial-setup.mdx
@@ -64,7 +64,7 @@ The parameters for your config file are the same as those which you would use as
 
 ## Config File Example
 
 ```toml
-# This file was automatically generated by Cerebrium as a starting point for your project. 
+# This file was automatically generated by Cerebrium as a starting point for your project.
 # You can edit it as you wish.
 # If you would like to learn more about your Cerebrium config, please visit https://docs.cerebrium.ai/cerebrium/environments/initial-setup#config-file-example
diff --git a/examples/langchain.mdx b/examples/langchain.mdx
index a1a5a472..17564cdb 100644
--- a/examples/langchain.mdx
+++ b/examples/langchain.mdx
@@ -148,7 +148,6 @@ We then integrate Langchain with a Cerebrium deployed endpoint to answer questio
 
 Your cerebrium.toml file is where you can set your compute/environment. Please make sure that the hardware you specify is a AMPERE_A5000, and that you have enough memory (RAM) on your instance to run the models.
 You cerebrium.toml file should look like:
-
 ```toml
 
 [cerebrium.build]
diff --git a/examples/sdxl.mdx b/examples/sdxl.mdx
index 65a91f80..ff293ca2 100644
--- a/examples/sdxl.mdx
+++ b/examples/sdxl.mdx
@@ -104,7 +104,6 @@ def predict(item, run_id, logger):
 
 Your cerebrium.toml file is where you can set your compute/environment. Please make sure that the hardware you specify is a AMPERE_A5000 and that you have enough memory (RAM) on your instance to run the models.
 You cerebrium.toml file should look like:
-
 ```toml
 
 [cerebrium.build]
diff --git a/examples/transcribe-whisper.mdx b/examples/transcribe-whisper.mdx
index 94331d5b..f9af07b2 100644
--- a/examples/transcribe-whisper.mdx
+++ b/examples/transcribe-whisper.mdx
@@ -123,7 +123,6 @@ In our predict function, which only runs on inference requests, we simply create
 
 Your cerebrium.toml file is where you can set your compute/environment. Please make sure that the hardware you specify is a AMPERE_A5000 and that you have enough memory (RAM) on your instance to run the models.
 You cerebrium.toml file should look like:
-
 ```toml
 
 [cerebrium.build]
@@ -157,6 +156,7 @@ openai-whisper
 
 [cerebrium.requirements.conda]
 ```
+
 To deploy the model use the following command:
 
 ```bash