Model choice option to support GPT-4 (#151)
* Model choice option to support GPT-4
* Added default model to config file
---------
Co-authored-by: Levi Purdy <[email protected]>
TheR1D authored Apr 9, 2023
1 parent d12d72b commit 2b7067f
Showing 7 changed files with 71 additions and 35 deletions.
48 changes: 26 additions & 22 deletions README.md
@@ -1,13 +1,13 @@
# Shell GPT
A command-line productivity tool powered by OpenAI's ChatGPT (GPT-3.5). As developers, we can leverage ChatGPT capabilities to generate shell commands, code snippets, comments, and documentation, among other things. Forget about cheat sheets and notes, with this tool you can get accurate answers right in your terminal, and you'll probably find yourself reducing your daily Google searches, saving you valuable time and effort.
A command-line productivity tool powered by OpenAI's GPT-3.5 model. As developers, we can leverage ChatGPT capabilities to generate shell commands, code snippets, comments, and documentation, among other things. Forget about cheat sheets and notes, with this tool you can get accurate answers right in your terminal, and you'll probably find yourself reducing your daily Google searches, saving you valuable time and effort.

<div align="center">
<img src="https://i.ibb.co/nzPqnVd/sgpt-v0-8.gif" width="800"/>
</div>

## Installation
```shell
pip install shell-gpt==0.8.5
pip install shell-gpt==0.8.6
```
You'll need an OpenAI API key, you can generate one [here](https://beta.openai.com/account/api-keys).

@@ -249,29 +249,33 @@ CACHE_LENGTH=100
CACHE_PATH=/tmp/shell_gpt/cache
# Request timeout in seconds.
REQUEST_TIMEOUT=60
# Default OpenAI model to use.
DEFAULT_MODEL=gpt-3.5-turbo
```
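To pin GPT-4 as the default, the new key can be added to the config file at `~/.config/shell_gpt/.sgptrc` — a usage sketch based on this diff; the `--model` option added in the same commit overrides it per invocation:

```shell
# ~/.config/shell_gpt/.sgptrc
DEFAULT_MODEL=gpt-4
```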

### Full list of arguments
```shell
╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────────────╮
│ prompt [PROMPT] The prompt to generate completions for. │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────╮
│ --temperature FLOAT RANGE [0.0<=x<=1.0] Randomness of generated output. [default: 0.1] │
│ --top-probability FLOAT RANGE [0.1<=x<=1.0] Limits highest probable tokens (words). [default: 1.0] │
│ --editor Open $EDITOR to provide a prompt. [default: no-editor] │
│ --cache Cache completion results. [default: cache] │
│ --help Show this message and exit. │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Chat Options ──────────────────────────────────────────────────────────────────────────────────────────╮
│ --chat TEXT Follow conversation with id (chat mode). [default: None] │
│ --show-chat TEXT Show all messages from provided chat id. [default: None] │
│ --list-chat List all existing chat ids. [default: no-list-chat] │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Assistance Options ────────────────────────────────────────────────────────────────────────────────────╮
│ --shell -s Generate and execute shell commands. │
│ --code Generate only code. [default: no-code] │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```text
╭─ Arguments ────────────────────────────────────────────────────────────────────────────────────────────────╮
│ prompt [PROMPT] The prompt to generate completions for. │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ──────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --model [gpt-3.5-turbo|gpt-4|gpt-4-32k] OpenAI GPT model to use. [default: gpt-3.5-turbo] │
│ --temperature FLOAT RANGE [0.0<=x<=1.0] Randomness of generated output. [default: 0.1] │
│ --top-probability FLOAT RANGE [0.1<=x<=1.0] Limits highest probable tokens (words). [default: 1.0] │
│ --editor Open $EDITOR to provide a prompt. [default: no-editor] │
│ --cache Cache completion results. [default: cache] │
│ --help Show this message and exit. │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Assistance Options ───────────────────────────────────────────────────────────────────────────────────────╮
│ --shell -s Generate and execute shell commands. │
│ --code --no-code Generate only code. [default: no-code] │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Chat Options ─────────────────────────────────────────────────────────────────────────────────────────────╮
│ --chat TEXT Follow conversation with id, use "temp" for quick session. [default: None] │
│ --repl TEXT Start a REPL (Read–eval–print loop) session. [default: None] │
│ --show-chat TEXT Show all messages from provided chat id. [default: None] │
│ --list-chat List all existing chat ids. [default: no-list-chat] │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```

## Docker
Expand Down
2 changes: 1 addition & 1 deletion setup.py
@@ -3,7 +3,7 @@
# pylint: disable=consider-using-with
setup(
name="shell_gpt",
version="0.8.5",
version="0.8.6",
packages=find_packages(),
install_requires=[
"typer~=0.7.0",
11 changes: 9 additions & 2 deletions sgpt/app.py
@@ -19,15 +19,19 @@
# Click is part of typer.
from click import MissingParameter, BadArgumentUsage
from sgpt import ChatHandler, DefaultHandler, ReplHandler, OpenAIClient, config
from sgpt.utils import get_edited_prompt, run_command
from sgpt.utils import get_edited_prompt, run_command, ModelOptions


def main( # pylint: disable=too-many-arguments
def main( # pylint: disable=too-many-arguments,too-many-locals
prompt: str = typer.Argument(
None,
show_default=False,
help="The prompt to generate completions for.",
),
model: ModelOptions = typer.Option(
ModelOptions(config.get("DEFAULT_MODEL")).value,
help="OpenAI GPT model to use.",
),
temperature: float = typer.Option(
0.1,
min=0.0,
@@ -103,6 +107,7 @@ def main( # pylint: disable=too-many-arguments
# Will be in infinite loop here until user exits with Ctrl+C.
ReplHandler(client, repl, shell, code).handle(
prompt,
model=model.value,
temperature=temperature,
top_probability=top_probability,
chat_id=repl,
@@ -112,6 +117,7 @@ def main( # pylint: disable=too-many-arguments
if chat:
full_completion = ChatHandler(client, chat, shell, code).handle(
prompt,
model=model.value,
temperature=temperature,
top_probability=top_probability,
chat_id=chat,
@@ -120,6 +126,7 @@ def main( # pylint: disable=too-many-arguments
else:
full_completion = DefaultHandler(client, shell, code).handle(
prompt,
model=model.value,
temperature=temperature,
top_probability=top_probability,
caching=cache,
1 change: 0 additions & 1 deletion sgpt/client.py
@@ -15,7 +15,6 @@

class OpenAIClient:
cache = Cache(CACHE_LENGTH, CACHE_PATH)
# chat_cache = ChatCache(CHAT_CACHE_LENGTH, CHAT_CACHE_PATH)

def __init__(self, api_host: str, api_key: str) -> None:
self.api_key = api_key
24 changes: 15 additions & 9 deletions sgpt/config.py
@@ -5,6 +5,8 @@

from click import UsageError

from sgpt.utils import ModelOptions

CONFIG_FOLDER = os.path.expanduser("~/.config")
CONFIG_PATH = Path(CONFIG_FOLDER) / "shell_gpt" / ".sgptrc"
# TODO: Refactor it to CHAT_STORAGE_PATH.
@@ -13,15 +15,7 @@
CHAT_CACHE_LENGTH = 100
CACHE_LENGTH = 100
REQUEST_TIMEOUT = 60
EXPECTED_KEYS = (
"OPENAI_API_HOST",
"OPENAI_API_KEY",
"CHAT_CACHE_LENGTH",
"CHAT_CACHE_PATH",
"CACHE_LENGTH",
"CACHE_PATH",
"REQUEST_TIMEOUT",
)
DEFAULT_MODEL = ModelOptions.GPT3.value
config = {}


@@ -43,6 +37,7 @@ def init() -> None:
config["CACHE_LENGTH"] = os.getenv("CACHE_LENGTH", str(CACHE_LENGTH))
config["CACHE_PATH"] = os.getenv("CACHE_PATH", str(CACHE_PATH))
config["REQUEST_TIMEOUT"] = os.getenv("REQUEST_TIMEOUT", str(REQUEST_TIMEOUT))
config["DEFAULT_MODEL"] = os.getenv("DEFAULT_MODEL", str(DEFAULT_MODEL))
_write()

with open(CONFIG_PATH, "r", encoding="utf-8") as file:
@@ -51,6 +46,17 @@ def init() -> None:
key, value = line.strip().split("=")
config[key] = value

# TODO: Refactor this module into a Config class.
# New features may add new keys to existing config.
if "DEFAULT_MODEL" not in config:
append("DEFAULT_MODEL", str(DEFAULT_MODEL))
init()


def append(key: str, value: str) -> None:
with open(CONFIG_PATH, encoding="utf-8", mode="a") as file:
file.write(f"{key}={value}\n")


def get(key: str) -> str:
# Prioritize ENV variables.
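The `config.py` hunk above introduces a self-healing config file: when a pre-existing `.sgptrc` predates a newly added key, `init` appends the default and re-reads. A standalone sketch of that pattern (not the repo's code — paths and parsing are simplified, and the file lives in a temp location here):

```python
from pathlib import Path
from tempfile import NamedTemporaryFile

DEFAULT_MODEL = "gpt-3.5-turbo"

def read_config(path: Path) -> dict:
    # Parse simple KEY=VALUE lines into a dict.
    config = {}
    for line in path.read_text(encoding="utf-8").splitlines():
        if line.strip():
            key, value = line.strip().split("=", 1)
            config[key] = value
    return config

def init(path: Path) -> dict:
    config = read_config(path)
    if "DEFAULT_MODEL" not in config:
        # New features may add keys that older config files lack:
        # append the default, then re-read so it lands in config.
        with path.open(mode="a", encoding="utf-8") as file:
            file.write(f"DEFAULT_MODEL={DEFAULT_MODEL}\n")
        return init(path)
    return config

# Simulate an old config file that lacks the new key.
with NamedTemporaryFile(mode="w", suffix=".sgptrc", delete=False) as tmp:
    tmp.write("CACHE_LENGTH=100\n")

config = init(Path(tmp.name))
print(config["DEFAULT_MODEL"])  # gpt-3.5-turbo
```

The recursion runs at most once per missing key, and the appended line persists, so subsequent runs take the fast path.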
6 changes: 6 additions & 0 deletions sgpt/utils.py
@@ -9,6 +9,12 @@
from click import BadParameter


class ModelOptions(str, Enum):
GPT3 = "gpt-3.5-turbo"
GPT4 = "gpt-4"
GPT4_32K = "gpt-4-32k"


class CompletionModes(Enum):
NORMAL = "normal"
SHELL = "shell"
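The `str, Enum` mixin is what lets one `ModelOptions` value serve both as a typer choice and as the raw string the API client expects. A quick self-contained illustration of that behavior (assuming only the three model names shown in the diff):

```python
from enum import Enum

class ModelOptions(str, Enum):
    GPT3 = "gpt-3.5-turbo"
    GPT4 = "gpt-4"
    GPT4_32K = "gpt-4-32k"

# Constructing from a plain string validates it against the allowed values,
# which is how app.py turns config.get("DEFAULT_MODEL") into a safe default.
model = ModelOptions("gpt-4")
print(model is ModelOptions.GPT4)  # True
print(isinstance(model, str))      # True: usable wherever a str is expected
print(model.value)                 # gpt-4

try:
    ModelOptions("gpt-5")          # unknown model names are rejected early
except ValueError:
    print("invalid model rejected")
```

Because members are `str` subclasses, `model.value` and `model` are interchangeable in most string contexts; the diff still passes `model.value` explicitly for clarity.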
14 changes: 14 additions & 0 deletions tests/integration_tests.py
@@ -12,11 +12,13 @@
from time import sleep
from pathlib import Path
from unittest import TestCase
from unittest.mock import patch, ANY
from tempfile import NamedTemporaryFile
from uuid import uuid4

import typer
from typer.testing import CliRunner

from sgpt import main, config

runner = CliRunner()
@@ -293,3 +295,15 @@ def test_zsh_command(self):
# but it is not part of the result.stdout.
# assert "command not found" not in result.stdout
# assert "hello world" in stdout.split("\n")[-1]

@patch("sgpt.client.OpenAIClient.get_completion")
def test_model_option(self, mocked_get_completion):
dict_arguments = {
"prompt": "What is the capital of the Czech Republic?",
"--model": "gpt-4",
}
result = runner.invoke(app, self.get_arguments(**dict_arguments))
mocked_get_completion.assert_called_once_with(
ANY, "gpt-4", 0.1, 1.0, caching=False
)
assert result.exit_code == 0
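The new test leans on `unittest.mock`'s call-argument matching: patch the method that would hit the network, drive the code, then assert on how it was called, using `ANY` as a wildcard for arguments the test doesn't care about. Here is that pattern in isolation — the toy `Client` and `handle` below are illustrative stand-ins, not the repo's classes:

```python
from unittest.mock import patch, ANY

class Client:
    def get_completion(self, messages, model, temperature, top_p, caching=True):
        raise RuntimeError("would hit the network")

def handle(client, model):
    # Stand-in for the handler chain: forwards the chosen model downstream.
    return client.get_completion([], model, 0.1, 1.0, caching=False)

with patch.object(Client, "get_completion", return_value="ok") as mocked:
    result = handle(Client(), "gpt-4")

# ANY wildcards the message payload; the remaining args must match exactly.
mocked.assert_called_once_with(ANY, "gpt-4", 0.1, 1.0, caching=False)
print(result)  # ok
```

Patching at the class level (as the test does with `sgpt.client.OpenAIClient.get_completion`) means the mock records calls without the bound `self`, so the first positional slot holds the messages payload.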
