
Add RTD #321

Merged: 29 commits merged into main on Feb 15, 2024
Conversation

mrwyattii (Contributor)

No description provided.

@mrwyattii marked this pull request as ready for review on February 14, 2024 at 00:07
@mrwyattii requested a review from awan-10 as a code owner on February 14, 2024 at 00:07
Review thread on the documentation text under change:

    parallelism, and high-performance CUDA kernels to support fast high throughput
    text-generation with LLMs. The latest version of MII delivers up to 2.5 times
    higher effective throughput compared to leading systems such as vLLM. For
    detailed performance results please see our `DeepSpeed-FastGen release blog
Comment from a reviewer (Contributor):
FYI, this is the same hyperlink as the one above in the same paragraph; is one of them meant to point to a different link?
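
For context, the excerpt above documents MII's text-generation pipeline. A minimal usage sketch, assuming the non-persistent `mii.pipeline` API; the model name and prompts here are illustrative and are not taken from this PR:

```python
# Minimal sketch of the MII text-generation pipeline the excerpt describes.
# Assumes the non-persistent mii.pipeline API; model name and prompts are
# illustrative only.
import mii

# Load a Hugging Face model into a DeepSpeed-FastGen inference pipeline.
pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")

# Generate completions for a batch of prompts.
responses = pipe(["DeepSpeed is", "Seattle is"], max_new_tokens=128)
print(responses)
```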

@mrwyattii merged commit 4b14e8e into main on Feb 15, 2024
4 checks passed