Support for returning Logits and Calculating Perplexity During Model Evaluation? #1314
-
Hello SGLang Community, I'm currently exploring the capabilities of the SGLang inference framework for LLMs, and I have a couple of questions regarding model evaluation:

1. Does SGLang support returning logits (or per-token log-probabilities) during model evaluation?
2. Is there a way to calculate perplexity using the framework?

If these features are not currently supported, are there any plans to include them in future updates? Any guidance or suggestions on how to implement these functionalities with the current framework would also be greatly appreciated. Thank you!

Best regards,
willhe
Replies: 1 comment
-
They are well supported. Some related docs:
sglang/docs/en/sampling_params.md
Lines 23 to 28 in 5ab9418
sglang/test/srt/test_openai_server.py
Line 72 in 5ab9418
https://github.com/sgl-…
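To illustrate the perplexity side: once per-token log-probabilities are obtained from the server (the sampling-params doc referenced above covers the relevant options), computing perplexity is a one-liner. The sketch below assumes you already have a list of log-probabilities for the tokens of interest; the function name `perplexity` is just illustrative, not part of the SGLang API.

```python
import math

def perplexity(token_logprobs):
    """Perplexity is the exponential of the negative mean
    per-token log-probability (natural log)."""
    if not token_logprobs:
        raise ValueError("need at least one token log-probability")
    avg_neg_logprob = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_neg_logprob)

# Example: three tokens with log-probabilities -1.0, -2.0, -3.0
# mean negative log-prob = 2.0, so perplexity = e^2 ≈ 7.389
print(perplexity([-1.0, -2.0, -3.0]))
```

In practice you would feed this function the input-token log-probabilities returned alongside the generation output, restricted to the span you want to score.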