🌏
  • Haidian District, Beijing
  • 08:43 - 8h ahead

Highlights

  • Pro


Popular repositories

  1. yolov5_prune (Public)

    YOLOv5 pruning on the COCO dataset

    Python · 83 stars · 9 forks

  2. CLIP-RC (Public)

    PyTorch implementation of the paper: Exploring Regional Clues in CLIP for Zero-Shot Semantic Segmentation.

    Python · 48 stars · 1 fork

  3. JCLIP (Public)

    Python · 12 stars · 3 forks

  4. Cluster-Adapter (Public)

    Python · 8 stars · 1 fork

  5. uyzhang (Public)

    2 stars

  6. jittor (Public)

    Forked from Jittor/jittor

    Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators.

    Python

536 contributions in the last year


Activity overview

Contributed to Jittor/jittor, uyzhang/CLIP-RC, uyzhang/JCLIP and 8 other repositories
A graph representing uyzhang's contributions from March 31, 2024 to April 6, 2025: 98% commits, 1% issues, 1% pull requests, 0% code review.

Contribution activity

April 2025

Created an issue in vllm-project/vllm that received 1 comment

[Performance]: LLM Offline Inference Slowing Down Over Time

Proposal to improve performance: I have encountered an issue with using vllm for offline LLM inference. Initially, the inference runs smoothly with …

1 task done
1 comment
5 contributions in private repositories Apr 1 – Apr 2