Added option to run sgpt with LocalAI (#307)
TheR1D authored Jul 19, 2023
1 parent 4aed53b commit 1c58566
Showing 5 changed files with 15 additions and 39 deletions.
7 changes: 5 additions & 2 deletions README.md

````diff
@@ -1,5 +1,5 @@
 # ShellGPT
-A command-line productivity tool powered by OpenAI's GPT models. As developers, we can leverage AI capabilities to generate shell commands, code snippets, comments, and documentation, among other things. Forget about cheat sheets and notes, with this tool you can get accurate answers right in your terminal, and you'll probably find yourself reducing your daily Google searches, saving you valuable time and effort. ShellGPT is cross-platform compatible and supports all major operating systems, including Linux, macOS, and Windows with all major shells, such as PowerShell, CMD, Bash, Zsh, Fish, and many others.
+A command-line productivity tool powered by AI large language models (LLM). As developers, we can leverage AI capabilities to generate shell commands, code snippets, comments, and documentation, among other things. Forget about cheat sheets and notes, with this tool you can get accurate answers right in your terminal, and you'll probably find yourself reducing your daily Google searches, saving you valuable time and effort. ShellGPT is cross-platform compatible and supports all major operating systems, including Linux, macOS, and Windows with all major shells, such as PowerShell, CMD, Bash, Zsh, Fish, and many others.
 
 https://user-images.githubusercontent.com/16740832/231569156-a3a9f9d4-18b1-4fff-a6e1-6807651aa894.mp4
 
@@ -358,7 +358,7 @@ Switch `SYSTEM_ROLES` to force use [system roles](https://help.openai.com/en/art
 │ prompt [PROMPT] The prompt to generate completions for. │
 ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
 ╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────────╮
-│ --model [gpt-4|gpt-4-32k|gpt-3.5|gpt-3.5-16k] OpenAI GPT model to use. [default: gpt-3.5-turbo] │
+│ --model TEXT OpenAI GPT model to use. [default: gpt-3.5-turbo] │
 │ --temperature FLOAT RANGE [0.0<=x<=2.0] Randomness of generated output. [default: 0.1] │
 │ --top-probability FLOAT RANGE [0.1<=x<=1.0] Limits highest probable tokens (words). [default: 1.0] │
 │ --editor Open $EDITOR to provide a prompt. [default: no-editor] │
@@ -384,6 +384,9 @@ Switch `SYSTEM_ROLES` to force use [system roles](https://help.openai.com/en/art
 ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
 ```
+## LocalAI
+By default, ShellGPT leverages OpenAI's large language models. However, it also provides the flexibility to use locally hosted models, which can be a cost-effective alternative. To use local models, you will need to run your own API server. You can accomplish this by using [LocalAI](https://github.com/go-skynet/LocalAI), a self-hosted, OpenAI-compatible API. Setting up LocalAI allows you to run language models on your own hardware, potentially without the need for an internet connection, depending on your usage. To set up your LocalAI, please follow this comprehensive [guide](https://github.com/TheR1D/shell_gpt/wiki/LocalAI). Remember that the performance of your local models may depend on the specifications of your hardware and the specific language model you choose to deploy.
+
 ## Docker
 Run the container using the `OPENAI_API_KEY` environment variable, and a docker volume to store cache:
 ```shell
````
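The LocalAI section added above can be exercised, for example, by pointing ShellGPT's `OPENAI_API_HOST` setting (introduced in `sgpt/config.py` below) at a local server. This is a hypothetical sketch: the port and model name are assumptions, and the actual setup steps are in the linked wiki guide.

```shell
# Hypothetical usage: assumes a LocalAI server is already listening on
# localhost:8080 and serving a model named "ggml-gpt4all-j".
export OPENAI_API_HOST=http://localhost:8080   # redirect API calls away from api.openai.com
sgpt --model ggml-gpt4all-j "mass of the sun"
```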
2 changes: 1 addition & 1 deletion sgpt/__init__.py

```diff
@@ -1,4 +1,4 @@
 from .app import main as main
 from .app import entry_point as cli  # noqa: F401
 
-__version__ = "0.9.3"
+__version__ = "0.9.4"
```
28 changes: 8 additions & 20 deletions sgpt/app.py

```diff
@@ -1,10 +1,3 @@
-"""
-This module provides a simple interface for OpenAI API using Typer
-as the command line interface. It supports different modes of output including
-shell commands and code, and allows users to specify the desired OpenAI model
-and length and other options of the output. Additionally, it supports executing
-shell commands directly from the interface.
-"""
 # To allow users to use arrow keys in the REPL.
 import readline  # noqa: F401
 import sys
@@ -18,12 +11,7 @@
 from sgpt.handlers.default_handler import DefaultHandler
 from sgpt.handlers.repl_handler import ReplHandler
 from sgpt.role import DefaultRoles, SystemRole
-from sgpt.utils import (
-    ModelOptions,
-    get_edited_prompt,
-    install_shell_integration,
-    run_command,
-)
+from sgpt.utils import get_edited_prompt, install_shell_integration, run_command
 
 
 def main(
@@ -32,9 +20,9 @@ def main(
         show_default=False,
         help="The prompt to generate completions for.",
     ),
-    model: ModelOptions = typer.Option(
-        ModelOptions(cfg.get("DEFAULT_MODEL")).value,
-        help="OpenAI GPT model to use.",
+    model: str = typer.Option(
+        cfg.get("DEFAULT_MODEL"),
+        help="Large language model to use.",
     ),
     temperature: float = typer.Option(
         0.1,
@@ -159,7 +147,7 @@ def main(
         # Will be in infinite loop here until user exits with Ctrl+C.
         ReplHandler(repl, role_class).handle(
             prompt,
-            model=model.value,
+            model=model,
             temperature=temperature,
             top_probability=top_probability,
             chat_id=repl,
@@ -169,7 +157,7 @@ def main(
     if chat:
         full_completion = ChatHandler(chat, role_class).handle(
             prompt,
-            model=model.value,
+            model=model,
             temperature=temperature,
             top_probability=top_probability,
             chat_id=chat,
@@ -178,7 +166,7 @@ def main(
     else:
         full_completion = DefaultHandler(role_class).handle(
             prompt,
-            model=model.value,
+            model=model,
             temperature=temperature,
             top_probability=top_probability,
             caching=cache,
@@ -198,7 +186,7 @@ def main(
         elif option == "d":
             DefaultHandler(DefaultRoles.DESCRIBE_SHELL.get_role()).handle(
                 full_completion,
-                model=model.value,
+                model=model,
                 temperature=temperature,
                 top_probability=top_probability,
                 caching=cache,
```
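The effect of dropping `ModelOptions` can be sketched in isolation: coercing `--model` through the old enum rejected any name outside the four OpenAI models, while the new plain-`str` option passes arbitrary names through to whichever API server is configured. The local model name below is a made-up example.

```python
from enum import Enum


class ModelOptions(str, Enum):
    # The enum removed by this commit (copied from sgpt/utils.py):
    # only these four model names passed validation.
    GPT4 = "gpt-4"
    GPT432k = "gpt-4-32k"
    GPT35TURBO = "gpt-3.5-turbo"
    GPT35TURBO16K = "gpt-3.5-turbo-16k"


def validate_old(name: str) -> str:
    # Pre-commit behavior: unknown names raise ValueError before any request is made.
    return ModelOptions(name).value


def validate_new(name: str) -> str:
    # Post-commit behavior: --model is a plain str, accepted untouched,
    # so model names served by a LocalAI backend work too.
    return name


print(validate_old("gpt-4"))           # accepted before and after
print(validate_new("ggml-gpt4all-j"))  # hypothetical local model: accepted now
```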
4 changes: 1 addition & 3 deletions sgpt/config.py

```diff
@@ -6,8 +6,6 @@
 
 from click import UsageError
 
-from .utils import ModelOptions
-
 CONFIG_FOLDER = os.path.expanduser("~/.config")
 SHELL_GPT_CONFIG_FOLDER = Path(CONFIG_FOLDER) / "shell_gpt"
 SHELL_GPT_CONFIG_PATH = SHELL_GPT_CONFIG_FOLDER / ".sgptrc"
@@ -23,7 +21,7 @@
     "CHAT_CACHE_LENGTH": int(os.getenv("CHAT_CACHE_LENGTH", "100")),
     "CACHE_LENGTH": int(os.getenv("CHAT_CACHE_LENGTH", "100")),
     "REQUEST_TIMEOUT": int(os.getenv("REQUEST_TIMEOUT", "60")),
-    "DEFAULT_MODEL": os.getenv("DEFAULT_MODEL", ModelOptions.GPT35TURBO.value),
+    "DEFAULT_MODEL": os.getenv("DEFAULT_MODEL", "gpt-3.5-turbo"),
     "OPENAI_API_HOST": os.getenv("OPENAI_API_HOST", "https://api.openai.com"),
     "DEFAULT_COLOR": os.getenv("DEFAULT_COLOR", "magenta"),
     "ROLE_STORAGE_PATH": os.getenv("ROLE_STORAGE_PATH", str(ROLE_STORAGE_PATH)),
```
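The changed `DEFAULT_MODEL` entry follows the same environment-variable-with-fallback pattern as the neighbouring config keys; a minimal sketch of that behavior (the override value is a made-up example):

```python
import os


def default_model() -> str:
    # Mirrors the changed config line: use DEFAULT_MODEL from the
    # environment if set, otherwise fall back to the plain string.
    return os.getenv("DEFAULT_MODEL", "gpt-3.5-turbo")


os.environ.pop("DEFAULT_MODEL", None)
print(default_model())  # falls back to the built-in default

os.environ["DEFAULT_MODEL"] = "my-local-model"  # hypothetical local model name
print(default_model())
```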
13 changes: 0 additions & 13 deletions sgpt/utils.py

```diff
@@ -1,26 +1,13 @@
 import os
 import platform
 import shlex
-from enum import Enum
 from tempfile import NamedTemporaryFile
 from typing import Any, Callable
 
 import typer
 from click import BadParameter
 
 
-class ModelOptions(str, Enum):
-    """
-    Model endpoint compatibility
-    https://platform.openai.com/docs/models/model-endpoint-compatibility
-    """
-
-    GPT4 = "gpt-4"
-    GPT432k = "gpt-4-32k"
-    GPT35TURBO = "gpt-3.5-turbo"
-    GPT35TURBO16K = "gpt-3.5-turbo-16k"
-
-
 def get_edited_prompt() -> str:
     """
     Opens the user's default editor to let them
```
