
docs: Added initial PR template with directions for doc only changes and squash merges [no ci] #7700

Merged · 11 commits · Jun 9, 2024
1 change: 1 addition & 0 deletions .github/PULL_REQUEST_TEMPLATE/pull_request_template.md
@@ -0,0 +1 @@
- [ ] I have read the [contributing guidelines](CONTRIBUTING.md)
14 changes: 14 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,14 @@
# Contributing Guidelines

## Checklist

* Make sure your PR follows the [coding guidelines](https://github.com/ggerganov/llama.cpp/blob/master/README.md#coding-guidelines)
* Test your changes using the commands in the [`tests`](tests) folder. For instance, running the `./tests/test-backend-ops` command tests different backend implementations of the GGML library
* Execute [the full CI locally on your machine](ci/README.md) before publishing

## PR formatting

* Please rate the complexity of your PR (e.g. `easy`, `medium`, `hard`). This makes it easier for maintainers to triage PRs.
* If the pull request only contains documentation changes (e.g., updating READMEs, adding new wiki pages), please add `[no ci]` to the commit title. This will skip unnecessary CI checks and help reduce build times.
* When squashing multiple commits on merge, use the following format for your commit title: `<module> : <commit title> (#<issue_number>)`. For example: `utils : Fix typo in utils.py (#1234)`
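The two title conventions above can be sketched as a quick self-check. This is a minimal illustration, not part of the project's tooling: the titles are hypothetical examples, and the regular expression is an assumption about the `<module> : <commit title> (#<issue_number>)` shape rather than an official rule.

```shell
# Pattern assumed from the convention above: "<module> : <title> (#<issue>)".
pattern='^[a-z0-9_.-]+ : .+ \(#[0-9]+\)$'

# Hypothetical squash-merge title; check it against the pattern.
squash_title='utils : Fix typo in utils.py (#1234)'
printf '%s\n' "$squash_title" | grep -Eq "$pattern" && squash_ok=yes || squash_ok=no

# Hypothetical docs-only title; check that it carries the CI-skip marker.
docs_title='docs : update README [no ci]'
case "$docs_title" in
  *'[no ci]'*) skips_ci=yes ;;
  *)           skips_ci=no ;;
esac

echo "squash_ok=$squash_ok skips_ci=$skips_ci"
```

A check like this could run as a local git hook, but the guidelines above only ask for the format itself, not automation.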
1 change: 1 addition & 0 deletions README.md
@@ -1088,6 +1088,7 @@ docker run --gpus all -v /path/to/models:/models local/llama.cpp:server-cuda -m

- Contributors can open PRs
- Collaborators can push to branches in the `llama.cpp` repo and merge PRs into the `master` branch
- Collaborators should follow the [PR template](.github/PULL_REQUEST_TEMPLATE/pull_request_template.md) when adding a PR
- Collaborators will be invited based on contributions
- Any help with managing issues and PRs is very appreciated!
- Make sure to read this: [Inference at the edge](https://github.com/ggerganov/llama.cpp/discussions/205)