Commit 07c8b83

[doc][core][cgraph] Clean up docs (ray-project#51263)

Signed-off-by: Rui Qiao <[email protected]>
1 parent: 4a13679

4 files changed: +5 additions, -3 deletions

doc/source/ray-core/compiled-graph/compiled-graph-api.rst (1 addition, 0 deletions)

@@ -20,6 +20,7 @@ DAG Construction

    ray.actor.ActorMethod.bind
    ray.dag.DAGNode.with_tensor_transport
+   ray.experimental.compiled_dag_ref.CompiledDAGRef

 Compiled Graph Operations
 -------------------------
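The APIs listed in this toctree fit together in a short pipeline: bind an actor method into a DAG, compile it, and resolve the ``CompiledDAGRef`` that ``execute()`` returns. The sketch below is illustrative only and assumes ``ray`` is installed; the ``Echo`` actor is a made-up example, not part of the Ray API.

```python
# Hedged sketch: how ActorMethod.bind, experimental_compile, and
# CompiledDAGRef combine. Guarded so it degrades cleanly without ray.
try:
    import ray
    from ray.dag import InputNode
    RAY_AVAILABLE = True
except ImportError:
    RAY_AVAILABLE = False


def run_echo_dag(value):
    # A trivial actor whose method we bind as a DAG node.
    @ray.remote
    class Echo:
        def echo(self, x):
            return x

    actor = Echo.remote()
    with InputNode() as inp:
        # ray.actor.ActorMethod.bind adds this method call to the DAG.
        dag = actor.echo.bind(inp)
    compiled = dag.experimental_compile()
    # execute() returns a CompiledDAGRef; ray.get() resolves it.
    return ray.get(compiled.execute(value))


if RAY_AVAILABLE and __name__ == "__main__":
    print(run_echo_dag(42))
```

This is the same construct-compile-execute shape the quickstart uses; only the actor definition is invented for brevity.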

doc/source/ray-core/compiled-graph/quickstart.rst (2 additions, 1 deletion)

@@ -232,7 +232,7 @@ GPU to GPU communication
 ------------------------
 Ray Compiled Graphs supports NCCL-based transfers of CUDA ``torch.Tensor`` objects, avoiding any copies through Ray's CPU-based shared-memory object store.
 With user-provided type hints, Ray prepares NCCL communicators and
-operation scheduling ahead of time, avoiding deadlock and `overlapping compute and communication <compiled-graph-overlap>`.
+operation scheduling ahead of time, avoiding deadlock and :ref:`overlapping compute and communication <compiled-graph-overlap>`.

 Ray Compiled Graph uses `cupy <https://cupy.dev/>`_ under the hood to support NCCL operations.
 The cupy version affects the NCCL version. The Ray team is also planning to support custom communicators in the future, for example to support collectives across CPUs or to reuse existing collective groups.
@@ -252,6 +252,7 @@ To support GPU-to-GPU communication with NCCL, wrap the DAG node that contains t
    :end-before: __cgraph_nccl_exec_end__

 Current limitations include:
+
 * ``torch.Tensor`` and NVIDIA NCCL only
 * Support for peer-to-peer transfers. Collective communication operations are coming soon.
 * Communication operations are currently done synchronously. :ref:`Overlapping compute and communication <compiled-graph-overlap>` is an experimental feature.
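The quickstart passage above describes wrapping a DAG node with a tensor-transport hint so a CUDA tensor moves peer-to-peer over NCCL instead of through the object store. A minimal sketch of that pattern follows; it assumes ``ray`` and ``torch`` are installed and two CUDA GPUs are available, and the ``Sender``/``Receiver`` actor names are illustrative, not part of the Ray API.

```python
# Hedged sketch of NCCL-based tensor transport in Ray Compiled Graphs.
# Guarded so it only attempts execution when ray and 2 GPUs are present.
try:
    import ray
    from ray.dag import InputNode
    RAY_AVAILABLE = True
except ImportError:
    RAY_AVAILABLE = False


def build_nccl_dag():
    @ray.remote(num_gpus=1)
    class Sender:
        def send(self, shape):
            import torch
            return torch.zeros(shape, device="cuda")

    @ray.remote(num_gpus=1)
    class Receiver:
        def recv(self, tensor):
            # The tensor arrives directly on this actor's GPU via NCCL.
            return tuple(tensor.shape)

    sender, receiver = Sender.remote(), Receiver.remote()
    with InputNode() as shape:
        # with_tensor_transport(transport="nccl") routes the CUDA tensor
        # peer-to-peer, bypassing Ray's CPU-based shared-memory store.
        dag = receiver.recv.bind(
            sender.send.bind(shape).with_tensor_transport(transport="nccl")
        )
    return dag.experimental_compile()


if RAY_AVAILABLE and __name__ == "__main__":
    ray.init()
    if ray.cluster_resources().get("GPU", 0) >= 2:
        compiled = build_nccl_dag()
        print(ray.get(compiled.execute((4, 4))))
```

Note the limitation listed in this same diff: transport is peer-to-peer only for now, so the hint applies to a single sender-receiver edge rather than a collective.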

doc/source/ray-core/compiled-graph/ray-compiled-graph.rst (1 addition, 1 deletion)

@@ -57,7 +57,7 @@ Ray Compiled Graph APIs simplify development of high-performance multi-GPU workl

 - Sub-millisecond level task orchestration.
 - Direct GPU-GPU peer-to-peer or collective communication.
-- `Heterogeneous <https://www.youtube.com/watch?v=Mg08QTBILWU>` or MPMD (Multiple Program Multiple Data) execution.
+- `Heterogeneous <https://www.youtube.com/watch?v=Mg08QTBILWU>`_ or MPMD (Multiple Program Multiple Data) execution.

 More Resources
 --------------

doc/source/ray-core/compiled-graph/troubleshooting.rst (1 addition, 1 deletion)

@@ -52,7 +52,7 @@ Compiled Graph is a new feature and has some limitations:

 - Collective operations

-  - Compiled Graph only supports the all-reduce collective operation for now.
+  - For GPU to GPU communication, Compiled Graph only supports peer-to-peer transfers. Collective communication operations are coming soon.

 Keep an eye out for additional features in future Ray releases:
 - Support better queuing of DAG inputs, to enable more concurrent executions of the same DAG.
