Commit c4cf42b — "Small fixes"
Parent: 16139f4

2 files changed (+4, −3 lines)

2 files changed

+4
-3
lines changed

available-hardware.mdx (+3 −2)

@@ -17,6 +17,7 @@ We have the following graphics cards available on the platform:
 | --------------------------------------------------------------------------------------------------- | :------: |------ | :-------------------: | :-------------------: |
 | [NVIDIA H100](https://www.nvidia.com/en-us/data-center/h100/) | Special Request | 80GB | Enterprise | [Coreweave, AWS]
 | [NVIDIA A100](https://www.nvidia.com/en-us/data-center/a100/) | Special Request | 80GB | Standard | [Coreweave, AWS]
+| [NVIDIA A100_80GB](https://www.nvidia.com/en-us/data-center/a100/) | AMPERE_A100 | 80GB | Standard | [Coreweave, AWS]
 | [NVIDIA A100_40GB](https://www.nvidia.com/en-us/data-center/a100/) | AMPERE_A100 | 40GB | Standard | [Coreweave, AWS]
 | [NVIDIA A10](https://www.nvidia.com/en-us/data-center/a100/) | AMPERE_A10 | 24GB | Standard | [AWS]
 | [NVIDIA L4](https://www.nvidia.com/en-us/data-center/a100/) | AMPERE_L4 | 24GB | Standard | [AWS]
@@ -25,8 +26,8 @@ We have the following graphics cards available on the platform:
 | [NVIDIA RTX A4000](https://www.nvidia.com/en-us/design-visualization/rtx-a4000/) | AMPERE_A4000 | 16GB | Hobby | [Coreweave]
 | [NVIDIA Quadro RTX 5000](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/quadro-product-literature/quadro-rtx-5000-data-sheet-us-nvidia-704120-r4-web.pdf) | TURING_5000 | 16GB | Hobby | [Coreweave]
 | [NVIDIA Quadro RTX 4000](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/quadro-product-literature/quadro-rtx-4000-datasheet-us-nvidia-1060942-r2-web.pdf) | TURING_4000 | 8GB | Hobby | [Coreweave]
-| [AWS INFERENTIA](https://www.nvidia.com/en-us/data-center/a100/) | INF2 | 32GB | Hobby | [AWS]
-| [AWS TRANIUM](https://www.nvidia.com/en-us/data-center/a100/) | TRN1 | 32GB | Hobby | [AWS]
+| [AWS INFERENTIA](https://aws.amazon.com/machine-learning/inferentia/) | INF2 | 32GB | Hobby | [AWS]
+| [AWS TRANIUM](https://aws.amazon.com/machine-learning/trainium/) | TRN1 | 32GB | Hobby | [AWS]
 
 
 These GPUs can be selected using the `--gpu` flag when deploying your model on Cortex or can be specified in your `cerebrium.toml`.

cerebrium/getting-started/quickstart.mdx (+1 −1)

@@ -6,7 +6,7 @@ description: "Get up and running with your first deployed model on Cerebrium"
 The fastest way to get started developing a Cerebrium deployment is to set up a template project using the `cerebrium init` command below. This will create a folder with all the necessary files to get you started. You can then add your code and deploy it to Cerebrium.
 
 ```bash
-cerebrium init first-project --name=cerebrium-app
+cerebrium init first-project
 ```
 
 Currently, our implementation has five components:
