From efcee51ffb291dc8760ff10e7b806ff496c40248 Mon Sep 17 00:00:00 2001
From: Michael Louis <michaellouis157@gmail.com>
Date: Tue, 28 May 2024 07:21:13 -0400
Subject: [PATCH] Updated titles

---
 cerebrium/deployments/async-functions.mdx     | 39 -------------------
 cerebrium/deployments/long-running-tasks.mdx  |  2 +-
 cerebrium/development/serve.mdx               |  2 +-
 .../environments/multi-gpu-inferencing.mdx    |  2 +-
 mint.json                                     |  1 -
 5 files changed, 3 insertions(+), 43 deletions(-)
 delete mode 100644 cerebrium/deployments/async-functions.mdx

diff --git a/cerebrium/deployments/async-functions.mdx b/cerebrium/deployments/async-functions.mdx
deleted file mode 100644
index 93ecffa8..00000000
--- a/cerebrium/deployments/async-functions.mdx
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: "(outdated) Async Functionality"
----
-
-<Note>You are now able to write `async` functions in your applications.</Note>
-
-Unfortunately, Cerebrium doesn't fully support async functionality; however, see below for how you can implement something similar using Cortex. Please
-let our team know if you would like the ability to use async functionality, along with your use case, so we can add it to our roadmap.
-
-The main reason Cortex doesn't support async functionality is that our **predict** function is executed synchronously. This means that you can use async
-functionality throughout your code; however, when execution reaches the predict function, it runs synchronously.
-
-For example, you can implement the following:
-
-```python
-from asyncio import (
-    new_event_loop,
-    set_event_loop,
-    gather,
-)
-
-def predict(item, run_id, logger):
-    # Create a fresh event loop to drive the coroutines synchronously
-    loop = new_event_loop()
-    set_event_loop(loop)
-    first_model = loop.create_task(predict_first_model())
-    second_model = loop.create_task(predict_second_model())
-    tasks = gather(first_model, second_model)
-    results = loop.run_until_complete(tasks)
-    loop.close()
-
-    return results
-```
-
-Essentially, what we are doing above is creating an event loop, which is responsible for executing coroutines and scheduling callbacks.
-We then run two separate async functions on the same loop, since we want both tasks to finish. If that is not the case, you can create multiple
-loops. We then use the `run_until_complete` method to wait until both functions have returned. Lastly, we return the results from the two predict functions.
-
-In effect, the code above runs asynchronous code synchronously.
diff --git a/cerebrium/deployments/long-running-tasks.mdx b/cerebrium/deployments/long-running-tasks.mdx
index 9a21a71c..f760877a 100644
--- a/cerebrium/deployments/long-running-tasks.mdx
+++ b/cerebrium/deployments/long-running-tasks.mdx
@@ -1,5 +1,5 @@
 ---
-title: "(Unavailable) Long Running Tasks"
+title: "Long Running Tasks"
 ---
 
 <Note>This feature is currently *unavailable* in the v4 API.</Note>
diff --git a/cerebrium/development/serve.mdx b/cerebrium/development/serve.mdx
index 60477515..5ad434fb 100644
--- a/cerebrium/development/serve.mdx
+++ b/cerebrium/development/serve.mdx
@@ -1,5 +1,5 @@
 ---
-title: (Unavailable) Code hot-reloading
+title: Code hot-reloading
 description: Use the `cerebrium serve` command to rapidly iterate on your code
 ---
 
diff --git a/cerebrium/environments/multi-gpu-inferencing.mdx b/cerebrium/environments/multi-gpu-inferencing.mdx
index 0ead3277..f71e9f2f 100644
--- a/cerebrium/environments/multi-gpu-inferencing.mdx
+++ b/cerebrium/environments/multi-gpu-inferencing.mdx
@@ -1,5 +1,5 @@
 ---
-title: (partly available) Multi-GPU Inferencing
+title: Multi-GPU Inferencing
 description: Tips and tricks for multi-GPU inferencing.
 ---
 
diff --git a/mint.json b/mint.json
index 3c23beff..8ebadbcc 100644
--- a/mint.json
+++ b/mint.json
@@ -98,7 +98,6 @@
       "group": "Deployments",
       "pages": [
         "cerebrium/deployments/long-running-tasks",
-        "cerebrium/deployments/async-functions",
         "cerebrium/deployments/ci-cd"
       ]
     },