
Conversation

@Sneakr commented Dec 18, 2025

Adds (If True, set alpha ~ sqrt(r) for correct scaling) to rslora comments for clarity.
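
For context, here is a minimal sketch of what the updated comment looks like in a typical unsloth LoRA setup. Only the use_rslora comment text comes from this PR; the FastLanguageModel.from_pretrained / get_peft_model calls, the model name, the target_modules list, and the r = 256 value are illustrative assumptions, not part of the diff.

import math
from unsloth import FastLanguageModel

# Illustrative base model; any unsloth-supported checkpoint works the same way.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

r = 256  # illustrative LoRA rank
model = FastLanguageModel.get_peft_model(
    model,
    r = r,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = int(math.sqrt(r)),          # alpha ~ sqrt(r), i.e. 16 for r = 256
    use_gradient_checkpointing = "unsloth",  # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = True,   # We support rank-stabilized LoRA (If True, set alpha ~ sqrt(r) for correct scaling)
    loftq_config = None, # And LoftQ
)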

@gemini-code-assist bot commented

Summary of Changes

Hello @Sneakr, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the clarity and usability of the codebase by updating documentation comments. Specifically, it provides explicit instructions for setting the alpha parameter when rslora is activated, ensuring users correctly configure the scaling for rank-stabilized LoRA. This change improves the developer experience by offering immediate, context-sensitive guidance.

Highlights

  • Improved Documentation for rslora: The comments for the use_rslora parameter have been updated across numerous notebooks and Python scripts to include crucial guidance: (If True, set alpha ~ sqrt(r) for correct scaling). This clarifies the recommended scaling for rank-stabilized LoRA when enabled.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist bot left a comment


Code Review

This pull request enhances the clarity of the rslora configuration comment across a large number of Jupyter notebooks and Python scripts. The change adds a helpful tip about setting alpha when use_rslora is enabled and standardizes the term to 'rank-stabilized LoRA'. While the change is beneficial, the resulting comment makes many lines exceed the recommended length, impacting readability. I've added suggestions to shorten the comment for better adherence to style guidelines.

" use_gradient_checkpointing = \"unsloth\", # True or \"unsloth\" for very long context\n",
" random_state = 3407,\n",
" use_rslora = False, # We support rank stabilized LoRA\n",
" use_rslora = False, # We support rank-stabilized LoRA (If True, set alpha ~ sqrt(r) for correct scaling)\n",

Severity: medium

The code within this notebook cell string contains a very long line, which can harm readability when viewed as code. To improve this, I suggest shortening the comment. This suggestion applies to all other notebooks modified in this PR.

Suggested change
" use_rslora = False, # We support rank-stabilized LoRA (If True, set alpha ~ sqrt(r) for correct scaling)\n",
" use_rslora = False, # RsLoRA: For correct scaling, set alpha ~ sqrt(r) if True.\n",

use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
random_state = 3407,
use_rslora = False, # We support rank stabilized LoRA
use_rslora = False, # We support rank-stabilized LoRA (If True, set alpha ~ sqrt(r) for correct scaling)

Severity: medium

This line is quite long (over 110 characters), which exceeds common line-length recommendations (79 characters in PEP 8, 88 for the black formatter). Long lines can be harder to read. To improve readability, I suggest shortening the comment. This suggestion is applicable to all other Python scripts modified in this PR.

Suggested change
use_rslora = False, # We support rank-stabilized LoRA (If True, set alpha ~ sqrt(r) for correct scaling)
use_rslora = False, # RsLoRA: For correct scaling, set alpha ~ sqrt(r) if True.

@Sneakr (Author) commented Dec 18, 2025

/gemini review

@gemini-code-assist bot left a comment


Code Review

This pull request improves comment clarity for rslora and loftq_config parameters across a large number of notebooks and Python scripts. The new comment for rslora is a significant improvement, providing useful guidance on its usage. The changes are consistent and improve code readability and maintainability. However, a critical issue has been identified: the loftq_update_log.txt file, which appears to be a local script execution log, has been added. This file contains a local file path, posing a security concern, and should be removed. For better clarity, I also suggest updating the pull request title and description to reflect all the changes made, including the updates to loftq_config comments.

Comment on lines 1 to 3 of loftq_update_log.txt
Comment updated to: We support LoftQ
Run at: 2025-12-18T02:47:41
Root: C:\Users\HZ\Documents\Dev\unsloth\notebooks

Severity: critical (security)

This log file appears to be a generated artifact from a local script and should not be committed to the repository.

It contains sensitive information, such as the local file path on your machine on line 3: C:\Users\HZ\Documents\Dev\unsloth\notebooks. Exposing such information is a security risk.

Please remove this file from the pull request. To prevent this from happening again, consider adding loftq_update_log.txt or a more general pattern like *.txt or *.log to your .gitignore file.
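
For reference, the suggested .gitignore entries could look like the lines below; whether to use the broad *.txt pattern depends on whether any other .txt files are meant to stay tracked in the repository:

# generated artifacts from local comment-update scripts
loftq_update_log.txt
*.log
# *.txt   # broader alternative; only use it if no tracked .txt files exist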

@Sneakr (Author) commented Dec 18, 2025

/gemini review

@gemini-code-assist bot left a comment


Code Review

This pull request systematically improves the clarity of configuration options across a large number of notebooks and Python scripts. The primary change enhances the comment for the use_rslora parameter, providing crucial information about scaling alpha. Additionally, the changes include stylistic improvements such as standardizing comments for loftq_config and correcting spacing and alignment, which significantly boosts code readability and maintainability. The updates are consistent and well-executed. Great work!

@Sneakr (Author) commented Dec 22, 2025

/gemini review

@gemini-code-assist bot left a comment


Code Review

This pull request aims to enhance the clarity of the rslora comment across numerous notebooks and Python scripts. While the change is an improvement, there's a discrepancy between the comment added and the one mentioned in the pull request description. The description suggests (If True, set alpha ~ sqrt(r) for correct scaling), which provides a clearer rule for setting the alpha parameter. The implemented comment uses a hardcoded range ~16-32, which is less informative. I've added suggestions to a notebook and a Python script to illustrate how to use the more descriptive comment. This change should be applied to all modified files.

" random_state = 3407,\n",
" use_rslora = False, # We support rank stabilized LoRA\n",
" loftq_config = None, # And LoftQ\n",
" use_rslora = False, # If True, set alpha to ~16–32\n",

Severity: medium

The new comment is an improvement, but the pull request description mentions (If True, set alpha ~ sqrt(r) for correct scaling), which is more descriptive and helpful than the hardcoded range ~16-32. It explains the relationship between alpha and r, providing better guidance. It's recommended to use this more informative comment across all modified files.

Suggested change
" use_rslora = False, # If True, set alpha to ~16–32\n",
" use_rslora = False, # If True, set alpha ~ sqrt(r) for correct scaling\n",

random_state = 3407,
use_rslora = False, # We support rank stabilized LoRA
loftq_config = None, # And LoftQ
use_rslora = False, # If True, set alpha to ~16–32

Severity: medium

As with the notebook files, the comment here could be more descriptive. The pull request description suggests (If True, set alpha ~ sqrt(r) for correct scaling), which is more informative than the hardcoded range ~16-32. This will help users understand how to set the alpha parameter correctly.

Suggested change
use_rslora = False, # If True, set alpha to ~16–32
use_rslora = False, # If True, set alpha ~ sqrt(r) for correct scaling

@Sneakr (Author) commented Dec 22, 2025

/gemini review

@gemini-code-assist bot left a comment


Code Review

This pull request enhances the clarity of the use_rslora parameter's comment across a large number of notebooks and Python scripts. The new comment provides practical, concrete ranges for the lora_alpha (α) hyperparameter, which is a great improvement for users. The changes also include some nice stylistic cleanups, like adding PEP8-compliant spacing and making related comments more consistent.

I've left a couple of specific comments with a suggestion to also include the α ≈ sqrt(r) rule of thumb, as mentioned in your PR description. This would provide valuable context by linking the practical advice to the theoretical foundation from the Rank-Stabilized LoRA paper. This suggestion applies to all files changed in this PR.

Additionally, please note the discrepancy between the PR description, which mentions adding (If True, set alpha ~ sqrt(r) for correct scaling), and the actual change. Updating the PR description to match the final implementation would improve clarity. There's also a small typo in the description ('clarify' instead of 'clarity').

Overall, this is a valuable change that improves the usability and consistency of these examples.

" random_state = 3407,\n",
" use_rslora = False, # We support rank stabilized LoRA\n",
" loftq_config = None, # And LoftQ\n",
" use_rslora = False, # If True, use α≈16–32 for high ranks; α≈4–16 for low ranks.\n",

Severity: medium

The new comment provides helpful, concrete ranges for α. To provide even more context for users, consider also mentioning the α ≈ sqrt(r) rule of thumb from the Rank-Stabilized LoRA paper, as you noted in the pull request description. This would connect the practical advice with the underlying principle. This suggestion can be applied to all similar changes in the notebooks, preserving the original value of use_rslora.

Suggested change
" use_rslora = False, # If True, use α≈16–32 for high ranks; α≈4–16 for low ranks.\n",
" use_rslora = False, # If True, use α ≈ sqrt(r). Eg: α≈16–32 for high ranks; α≈4–16 for low ranks.\n",

random_state = 3407,
use_rslora = False, # We support rank stabilized LoRA
loftq_config = None, # And LoftQ
use_rslora = False, # If True, use α≈16–32 for high ranks; α≈4–16 for low ranks.

Severity: medium

The new comment provides helpful, concrete ranges for α. To provide even more context for users, consider also mentioning the α ≈ sqrt(r) rule of thumb from the Rank-Stabilized LoRA paper, as you noted in the pull request description. This would connect the practical advice with the underlying principle. This suggestion can be applied to all similar changes in the Python scripts, preserving the original value of use_rslora.

Suggested change
use_rslora = False, # If True, use α≈16–32 for high ranks; α≈4–16 for low ranks.
use_rslora = False, # If True, use α ≈ sqrt(r). Eg: α≈16–32 for high ranks; α≈4–16 for low ranks.
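
As a quick sanity check of how the α ≈ sqrt(r) rule of thumb lines up with the quoted ranges, here is a small illustrative computation (the rank values are examples, not taken from this PR):

import math

for r in (16, 64, 256, 1024):       # illustrative LoRA ranks
    alpha = round(math.sqrt(r))     # alpha ~ sqrt(r) rule of thumb
    print(f"r = {r:4d}  ->  alpha ~ {alpha}")
# r =   16  ->  alpha ~ 4
# r =   64  ->  alpha ~ 8
# r =  256  ->  alpha ~ 16
# r = 1024  ->  alpha ~ 32

Low ranks land in the α≈4–16 range and high ranks in the α≈16–32 range, which is consistent with the suggested comment.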
