🎯
Focusing

Highlights

  • Pro

Pinned

  1. NetX-lab/GoMathL2O-Official Public

    Python 4

336 contributions in the last year

Contribution graph: daily contributions by week, March 2024 through March 2025.

Activity overview

Contributed to NetX-lab/GoMathL2O-Official, abetlen/llama-cpp-python, ggml-org/llama.cpp and 2 other repositories
A graph representing simmonssong's contributions from March 24, 2024 to March 29, 2025: 99% commits, 1% issues, 0% pull requests, 0% code review.

Contribution activity

March 2025

Created an issue in abetlen/llama-cpp-python that received 3 comments

How to predict a specific length of tokens?

In llama.cpp, the --n-predict option is used to set the number of tokens to predict when generating text. I don't find the binding for that in the docs.

3 comments
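For context, a hedged sketch of how this is commonly handled in llama-cpp-python's high-level API: the max_tokens argument of the completion call caps how many tokens are generated, playing the role of llama.cpp's --n-predict. The model path below is hypothetical.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GGUF model
# exists at ./models/model.gguf (hypothetical path).
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf")

# max_tokens limits the number of tokens generated for this completion,
# analogous to llama.cpp's --n-predict command-line option.
out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```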
Opened 1 other issue in 1 repository
QwenLM/Qwen2.5 1 open
Started 4 discussions in 3 repositories