LMTools is a powerful CLI tool that leverages large language models to automate development workflows. It generates conventional commit messages from staged changes and creates comprehensive issue/pull request documentation from commit history and code diffs.
- Conventional Commit Compliance: Generates messages following the Conventional Commits specification
- Smart Diff Analysis: Analyzes staged changes to understand code modifications
- Flexible Options: Optional scope and footer inclusion for enhanced commit messages
- Direct Integration: Commit changes directly or copy to clipboard
- Size-Aware Processing: Intelligent handling of large diffs with configurable word limits
- Multi-Source Input: Generate from commit messages, code diffs, or both
- Template-Based: Uses comprehensive templates for consistent documentation
- Multiple Outputs: Generate issues, pull requests, or combined documentation
- Advanced Diff Processing: Smart quota allocation for large codebases
- Context Enhancement: Add custom context to guide generation
- DeepSeek: `deepseek-reasoner`, `deepseek-chat`
- OpenRouter: Access to multiple models, including:
  - Meta Llama (3.1 405B, 3.3 70B)
  - Qwen (72B, 32B)
  - GPT-4
  - Claude 3.5 Sonnet
- Fireworks: `deepseek-r1`, `qwen2.5-72b`
- LFS-Aware Processing: Special handling for Git LFS files with reduced quota allocation
- Quota Management: Configurable word limits with proportional or average allocation modes
- Token Usage Tracking: Monitor API usage and costs
- Error Handling: Robust error handling with detailed logging
- Interactive CLI: User-friendly command-line interface with sensible defaults
- Python 3.12+
- Git repository with staged changes or commit history
- API keys for at least one supported LLM provider
git clone https://github.com/yourusername/lmtools.git
cd lmtools
# Using venv
python -m venv .venv
source .venv/bin/activate # Linux/macOS
# .venv\Scripts\activate # Windows
# Or using uv (recommended)
uv venv
source .venv/bin/activate
# Or using conda/mamba
conda create -n lmtools python=3.12
conda activate lmtools
# Using pip
pip install -r requirements.txt
# Or using uv
uv pip install -r requirements.txt
Create a `lmtools.env` file in the project root:
# DeepSeek API
DEEPSEEK_API_KEY="your-deepseek-api-key"
# OpenRouter API
OPENROUTER_API_KEY="your-openrouter-api-key"
# Fireworks API
FIREWORKS_API_KEY="your-fireworks-api-key"
cp mappings.json.example mappings.json
Then edit `mappings.json` to set your preferred default provider, models, temperatures, and token limits. If `mappings.json` is absent, LMTools falls back to sensible defaults and prints a warning.
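Under the hood this is handled by `lmtools/lmtools/config/config.py` and `lmtools/lmtools/config/mappings.py` (see the architecture section below). A minimal sketch of the loading behavior, assuming python-dotenv and purely illustrative function and fallback names:

```python
# A minimal sketch of the configuration loading described above; the function
# and fallback values are illustrative, not the actual LMTools internals.
import json
import logging
import os
from pathlib import Path

from dotenv import load_dotenv  # python-dotenv

FALLBACK_MAPPINGS = {  # assumed defaults, for illustration only
    "providers": {
        "deepseek": {
            "default_model": "deepseek-reasoner",
            "max_tokens": 8000,
            "temperature": 0.2,
        }
    }
}


def load_config(root: Path) -> dict:
    """Load API keys from lmtools.env and provider settings from mappings.json."""
    load_dotenv(root / "lmtools.env")  # exposes DEEPSEEK_API_KEY etc. via os.environ
    mappings = root / "mappings.json"
    if mappings.exists():
        return json.loads(mappings.read_text())
    logging.warning("mappings.json not found; falling back to defaults")
    return FALLBACK_MAPPINGS


if __name__ == "__main__":
    cfg = load_config(Path("."))
    print(cfg["providers"]["deepseek"]["default_model"],
          "key set:", bool(os.getenv("DEEPSEEK_API_KEY")))
```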
Add to your shell configuration (`~/.bashrc`, `~/.zshrc`, etc.):
# For venv/conda
alias lm='/path/to/lmtools/.venv/bin/python /path/to/lmtools/lmtools/main.py'
# For uv
alias lm='source /path/to/lmtools/.venv/bin/activate && python /path/to/lmtools/lmtools/main.py; deactivate'
Reload your shell:
source ~/.bashrc # or ~/.zshrc
# Stage your changes
git add .
# Run LMTools
lm
# Select option 1: Generate commit message
# Or run directly
python lmtools/main.py
# From your feature branch
lm
# Select option 2: Generate issue, pull request or both
- Entry point (`lmtools/main.py`): Shows a menu and dispatches to the generator you pick.
  - Option 1 → `CommitGenerator` (`lmtools/lmtools/generators/commitgen.py`)
  - Option 2 → `IssuePullRequestGenerator` (`lmtools/lmtools/generators/issueprgen.py`)
- Shared core (`BaseGenerator`): Both generators extend `BaseGenerator` (`lmtools/lmtools/generators/base.py`), which centralizes:
  - Provider/model selection via `mappings.json` (`lmtools/lmtools/config/mappings.py`)
  - Env-backed provider configs via `lmtools/lmtools/config/config.py` and `lmtools.env`
  - LLM client init (OpenRouter, Fireworks, DeepSeek)
  - Git helpers, enhanced diff processing, LFS-aware quotas, clipboard utilities
- Prompts and templates (text files):
  - Commit: `lmtools/lmtools/prompts/commitgen_prompt.txt`
  - Issue/PR: `lmtools/lmtools/prompts/issuepr_prompt.txt` + `lmtools/lmtools/templates/`
- Common pipeline (shared between commit and issue/PR; a code sketch follows the flowchart below):
  - Collect git inputs
  - Apply enhanced diff processing and quota allocation before building messages
  - Load prompt/template text files and build the messages
  - Select provider/model, init client, invoke LLM
  - Parse and present output, then act (commit/copy)
- What differs:
  - Inputs: commit uses the staged diff; issue/PR uses recent diffs and can include commit messages against a base branch
  - Prompts/templates: commit vs. issue/PR prompt + templates
  - Actions: the commit flow can run `git commit`; issue/PR outputs are copied for platform use
flowchart TD
A[User runs lmtools] --> B{main menu}
B -->|1 Commit| C[CommitGenerator]
B -->|2 Issue/PR/Both| D[IssuePullRequestGenerator]
C --> E[BaseGenerator utilities]
D --> E[BaseGenerator utilities]
E --> F[Load provider mappings: mappings.json]
E --> G[Load env config: lmtools.env]
E --> H[Collect git inputs]
H --> I[Enhanced diff processing and quotas]
I --> M[Build messages]
C --> J[commit prompt]
D --> K[issue/pr prompt]
D --> L[issue/pr templates]
J --> M[Build messages]
K --> M[Build messages]
L --> M[Build messages]
F --> N[Init LLM client]
G --> N[Init LLM client]
M --> O[Invoke LLM]
N --> O[Invoke LLM]
O --> P[Parse and display]
P --> Q[Commit or copy]
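The flowchart above boils down to a template method on `BaseGenerator`. The sketch below is illustrative only; the method names are assumptions and do not mirror the real class in `lmtools/lmtools/generators/base.py`:

```python
# Illustrative sketch of the shared pipeline; the actual BaseGenerator API
# may differ in names and structure.
from abc import ABC, abstractmethod


class PipelineSketch(ABC):
    """Template-method view of what the two generators share (names are hypothetical)."""

    def run(self) -> str:
        raw = self.collect_git_inputs()          # staged diff, or recent diffs + commit messages
        trimmed = self.apply_quota(raw)          # enhanced diff processing and word quotas
        messages = self.build_messages(trimmed)  # prompt/template text files -> chat messages
        output = self.invoke_llm(messages)       # provider/model via mappings.json, keys via lmtools.env
        return self.present(output)              # parse, display, then commit or copy

    @abstractmethod
    def collect_git_inputs(self) -> str: ...     # differs: staged diff vs. branch history

    @abstractmethod
    def present(self, output: str) -> str: ...   # differs: git commit vs. copy to clipboard

    # These stand in for helpers shared through the base class.
    @abstractmethod
    def apply_quota(self, diff: str) -> str: ...

    @abstractmethod
    def build_messages(self, diff: str) -> list: ...

    @abstractmethod
    def invoke_llm(self, messages: list) -> str: ...
```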
- Stage Your Changes: `git add .`
- Run LMTools: `lm`, then select option 1 (Generate commit message)
- Configure Options:
  - Choose LLM provider (DeepSeek, OpenRouter, Fireworks)
  - Select model and parameters
  - Configure scope and footer inclusion
  - Set word count limits for large diffs
- Review and Commit:
  - Review the generated message
  - Choose to commit directly or copy to clipboard
Example Output:
feat(parser): add array parsing capability
- Implement recursive descent parser for JSON arrays
- Add nested array support with type validation
- Include error handling for malformed structures
- Update documentation with usage examples
- Navigate to Feature Branch: `git checkout feature-branch`
- Run LMTools: `lm`, then select option 2 (Generate issue, pull request or both)
- Configure Generation:
  - Select output type (Issue, PR, or Both)
  - Choose input source (commit messages, diffs, or both)
  - Set word count limits and quota mode
  - Add optional context
- Review Output:
  - Review generated documentation
  - Copy to clipboard for use in GitHub/GitLab
Example PR Output:
## Title: Add Array Parsing Capability to JSON Parser
## Related Issue
Issue: #123
## Change Overview
### Overview
This PR implements comprehensive array parsing functionality for the JSON parser, enabling support for nested arrays and complex data structures. The solution addresses performance bottlenecks in large array processing while maintaining backward compatibility.
### Scope
Main changes:
- Enhanced parser with recursive descent algorithm
- Added type validation and error handling
- Updated test suite with comprehensive coverage
- Improved documentation and examples
## Technical Details
### Architecture Changes
- Introduced ArrayParser class with visitor pattern
- Modified ParserFactory to support array parsing
- Updated error handling with specific array-related exceptions
### Code Changes
- New `ArrayParser` class with recursive descent implementation
- Enhanced `ParserFactory` with array detection logic
- Updated `JSONValidator` with array-specific validation rules
- Added comprehensive test coverage for edge cases
### Design Decisions
- Chose recursive descent over iterative approach for better readability
- Implemented visitor pattern for extensible parsing logic
- Used builder pattern for complex array construction
The tool supports multiple LLM providers with configurable settings. Edit `mappings.json` to customize:
{
  "providers": {
    "deepseek": {
      "models": {
        "deepseek-reasoner": {
          "model_name": "deepseek-reasoner"
        }
      },
      "default_model": "deepseek-reasoner",
      "max_tokens": 8000,
      "temperature": 0.2,
      "max_retries": 3
    }
  }
}
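All three providers expose OpenAI-compatible endpoints, so an entry like the one above maps naturally onto a client constructor plus per-request parameters. A hedged sketch (base URLs and environment-variable wiring are assumptions to verify against `lmtools/lmtools/config/config.py` and the provider docs):

```python
# Sketch: turning a mappings.json provider entry into an OpenAI-compatible client.
# Base URLs and env-var names are assumptions, not guaranteed LMTools behavior.
import os

from openai import OpenAI

BASE_URLS = {
    "deepseek": "https://api.deepseek.com",
    "openrouter": "https://openrouter.ai/api/v1",
    "fireworks": "https://api.fireworks.ai/inference/v1",
}


def make_client(provider: str, max_retries: int = 3) -> OpenAI:
    """Build a client for one provider; max_retries mirrors the mappings entry."""
    return OpenAI(
        base_url=BASE_URLS[provider],
        api_key=os.environ[f"{provider.upper()}_API_KEY"],
        max_retries=max_retries,
    )

# "model_name", "max_tokens" and "temperature" from the entry are then passed
# per request, e.g. client.chat.completions.create(model=..., max_tokens=..., ...).
```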
- `DEEPSEEK_API_KEY`: Your DeepSeek API key
- `OPENROUTER_API_KEY`: Your OpenRouter API key
- `FIREWORKS_API_KEY`: Your Fireworks API key
- Word Count Limits: Configure maximum words for diff processing
- Quota Modes: Choose between proportional or average allocation
- LFS Handling: Automatic detection and reduced quota for LFS files
- Temperature & Tokens: Adjust model parameters per request
For repositories with large changes, LMTools offers intelligent diff processing (a quota-allocation sketch follows the mode list below):
# When diff exceeds 5000 words, enhanced processing is offered
# Choose between:
# 1. Average LFS-aware mode (LFS files get 25% of regular file quota)
# 2. Proportional mode (quota based on diff sizes)
- Proportional Mode: Allocates word quota based on actual diff sizes
- Average LFS-Aware Mode: Equal allocation with LFS file reduction
- Configurable Limits: Set custom word limits per operation
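A minimal sketch of the two modes, assuming per-file diffs measured in words (the function names are illustrative, not the actual helpers in `base.py`):

```python
# Illustrative quota allocation for large diffs; not the exact LMTools code.
LFS_FACTOR = 0.25  # per the docs, LFS files get 25% of a regular file's quota


def proportional_quota(word_counts: dict[str, int], limit: int) -> dict[str, int]:
    """Split the word limit across files in proportion to each diff's size."""
    total = sum(word_counts.values()) or 1
    return {path: int(limit * count / total) for path, count in word_counts.items()}


def average_lfs_quota(word_counts: dict[str, int], lfs_files: set[str], limit: int) -> dict[str, int]:
    """Give every file an equal share, with LFS files reduced to a fraction of it."""
    weights = {p: (LFS_FACTOR if p in lfs_files else 1.0) for p in word_counts}
    total_weight = sum(weights.values()) or 1.0
    return {p: int(limit * w / total_weight) for p, w in weights.items()}


# Example: a 5000-word budget over two regular files and one LFS file.
counts = {"parser.py": 4000, "tests.py": 1000, "model.bin": 9000}
print(proportional_quota(counts, 5000))
print(average_lfs_quota(counts, {"model.bin"}, 5000))
```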
Generate documentation from different input sources:
- Commit Messages Only: Use commit history for context
- Diffs Only: Analyze code changes directly
- Both: Combine commit messages and diffs for comprehensive documentation
LMTools provides detailed token usage information:
Prompt Tokens: 1,234
Completion Tokens: 567
Total Token Usage: 1,801
Monitor your API usage to manage costs effectively.
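With an OpenAI-compatible client, these counts come back on the response's `usage` object; a rough sketch of how such a report can be produced (not the exact LMTools reporting code):

```python
# Sketch: reading token usage from an OpenAI-compatible chat completion.
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com",  # assumed DeepSeek endpoint
                api_key=os.environ["DEEPSEEK_API_KEY"])
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Summarize this diff: ..."}],
)
usage = response.usage
print(f"Prompt Tokens: {usage.prompt_tokens:,}")
print(f"Completion Tokens: {usage.completion_tokens:,}")
print(f"Total Token Usage: {usage.total_tokens:,}")
```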
No staged changes detected
# Ensure you have staged changes
git add .
git status # Verify changes are staged
API key not found
# Check your lmtools.env file
cat lmtools.env
# Ensure API keys are properly set
Large diff processing
# For very large diffs, consider:
# 1. Using enhanced diff processing
# 2. Increasing word count limits
# 3. Using proportional quota mode
Enable detailed logging by modifying the logging level in `base.py`:
logging.basicConfig(level=logging.DEBUG)
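A slightly fuller configuration (a suggestion, not what ships in `base.py`) also timestamps and labels each record:

```python
import logging

# Suggested debug configuration; adjust to taste.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
```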
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
# Install development dependencies
pip install -r requirements.txt
# Run tests (when available)
python -m pytest
# Format code
black lmtools/
This project is licensed under the MIT License. See the LICENSE file for details.
For support and questions:
- Open an issue on GitHub
- Check the troubleshooting section above
- Review the configuration documentation
Made with ❤️ for developers who love clean, automated workflows.