[Bug] Torch 2.5 issue with Tensor Parallel Size > 1 #1925

Open · 5 tasks done
CortexEdgeUser opened this issue Nov 5, 2024 · 0 comments

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise it as a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

With PyTorch 2.5, launching the server with tensor parallel size > 1 fails during startup with an EOFError. Do you know where it might come from?

Reproduction
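
The exact launch command was not included in the report; a typical invocation matching these logs (the model path is a placeholder, and --tp-size 8 matches the eight TP ranks below) would be something like:

python -m sglang.launch_server --model-path <model-path> --tp-size 8

Startup then fails with the following output (the "1|sglang |" prefix appears to come from the process manager used to run the server):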

1|sglang | [2024-11-05 13:52:51 TP2] Init torch distributed begin.
1|sglang | [2024-11-05 13:52:52 TP1] Init torch distributed begin.
1|sglang | [2024-11-05 13:52:52 TP7] Init torch distributed begin.
1|sglang | [2024-11-05 13:52:53 TP4] Init torch distributed begin.
1|sglang | [2024-11-05 13:52:53 TP6] Init torch distributed begin.
1|sglang | [2024-11-05 13:52:54 TP3] Init torch distributed begin.
1|sglang | [2024-11-05 13:52:54 TP0] Init torch distributed begin.
1|sglang | [2024-11-05 13:52:54 TP5] Init torch distributed begin.
1|sglang | Traceback (most recent call last):
1|sglang |   File "/root/miniconda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main
1|sglang |     return _run_code(code, main_globals, None,
1|sglang |   File "/root/miniconda3/lib/python3.10/runpy.py", line 86, in _run_code
1|sglang |     exec(code, run_globals)
1|sglang |   File "/home/ubuntu/sglang-4/python/sglang/launch_server.py", line 16, in <module>
1|sglang |     raise e
1|sglang |   File "/home/ubuntu/sglang-4/python/sglang/launch_server.py", line 14, in <module>
1|sglang |     launch_server(server_args)
1|sglang |   File "/home/ubuntu/sglang-4/python/sglang/srt/server.py", line 436, in launch_server
1|sglang |     launch_engine(server_args=server_args)
1|sglang |   File "/home/ubuntu/sglang-4/python/sglang/srt/server.py", line 413, in launch_engine
1|sglang |     scheduler_pipe_readers[i].recv()
1|sglang |   File "/root/miniconda3/lib/python3.10/multiprocessing/connection.py", line 250, in recv
1|sglang |     buf = self._recv_bytes()
1|sglang |   File "/root/miniconda3/lib/python3.10/multiprocessing/connection.py", line 414, in _recv_bytes
1|sglang |     buf = self._recv(4)
1|sglang |   File "/root/miniconda3/lib/python3.10/multiprocessing/connection.py", line 383, in _recv
1|sglang |     raise EOFError
1|sglang | EOFError
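
For context (this analysis is not part of the original report): the frame at server.py line 413 shows the parent process blocking on scheduler_pipe_readers[i].recv(), waiting for each TP scheduler subprocess to signal that initialization finished. Connection.recv() raises EOFError when the write end of the pipe closes with nothing left to read, i.e. when a scheduler process dies during "Init torch distributed" before ever sending its ready message. A minimal sketch of that mechanism (hypothetical names, not sglang code):

import multiprocessing as mp

def scheduler_proc(writer):
    # Stand-in for a scheduler that crashes during init (e.g. while
    # setting up torch.distributed) before sending its ready signal.
    raise RuntimeError("init failed")
    writer.send("ready")  # never reached

if __name__ == "__main__":
    reader, writer = mp.Pipe(duplex=False)
    proc = mp.Process(target=scheduler_proc, args=(writer,))
    proc.start()
    writer.close()  # parent drops its copy so EOF can be observed
    try:
        reader.recv()  # parent blocks here, like server.py:413
    except EOFError:
        print("EOFError: child exited before sending readiness")
    proc.join()

So the EOFError in the parent is only a symptom; the real failure is whatever killed the scheduler processes, which would surface earlier in their own stderr.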

Environment

Python: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 550.90.12
PyTorch: 2.5.0+cu124
sglang: 0.3.4.post2
flashinfer: 0.1.6+cu121torch2.4
triton: 3.1.0
transformers: 4.46.0
requests: 2.32.3
tqdm: 4.66.5
numpy: 1.26.4
aiohttp: 3.10.10
fastapi: 0.115.4
hf_transfer: 0.1.8
huggingface_hub: 0.26.2
interegular: 0.3.3
packaging: 24.1
PIL: 10.4.0
psutil: 6.1.0
pydantic: 2.9.2
uvicorn: 0.32.0
uvloop: 0.21.0
zmq: 26.2.0
vllm: 0.6.3.post1
multipart: 0.0.16
openai: 1.52.2
anthropic: 0.37.1
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 SYS 0-103 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 SYS 0-103 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 SYS 0-103 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 SYS 0-103 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS 104-207 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS 104-207 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS 104-207 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS 104-207 1 N/A
NIC0 SYS SYS SYS SYS SYS SYS SYS SYS X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0

Hypervisor vendor: KVM
ulimit soft: 1024
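
One observation from the environment dump above (not raised in the original report): the installed flashinfer wheel (0.1.6+cu121torch2.4) was built against torch 2.4, while the running PyTorch is 2.5.0+cu124. A quick way to surface such a mismatch:

from importlib.metadata import version

# Version strings as reported in this environment; a "+...torch2.4"
# local tag on flashinfer while torch reports 2.5 indicates a wheel
# built against a different torch release.
print(version("torch"))       # 2.5.0+cu124
print(version("flashinfer"))  # 0.1.6+cu121torch2.4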

@CortexEdgeUser CortexEdgeUser changed the title [Bug] Torch 2.5 issue with Tensor Parallel Size > 8 [Bug] Torch 2.5 issue with Tensor Parallel Size > 1 Nov 5, 2024