## Howto - a humble command-line assistant

Howto helps you solve command-line tasks with AI. Describe the task, and `howto` will suggest a solution:

```text
$ howto curl example.org but print only the headers
curl -I example.org

The `curl` command is used to transfer data from or to a server.
The `-I` option tells `curl` to fetch the HTTP headers only, without the body
content.
```

Howto works with any OpenAI-compatible provider and local Ollama models (coming soon). It's a simple tool that doesn't interfere with your terminal. Not an "intelligent terminal" or anything. You ask, and howto answers. That's the deal.

```text
Usage: howto [-h] [-v] [-run] [question]

A humble command-line assistant.

Options:
  -h, --help     Show this help message and exit
  -v, --version  Show version information and exit
  -run           Run the last suggested command
  question       Describe the task to get a command suggestion
                 Use '+' to ask a follow up question
```

There are some additional features you may find useful. See the Usage section for details.

## Installation

### Go install

This method is preferred if you have Go installed:

```text
go install github.com/nalgeon/howto@latest
```

### Manual

`howto` is a binary executable file (`howto.exe` on Windows, `howto` on Linux/macOS). Download it from the link below, unpack it, and put it somewhere in your `PATH` ([what's that?](https://gist.github.com/nex3/c395b2f8fd4b02068be37c961301caa7)) so you can run it from anywhere on your computer.

[**Download**](https://github.com/nalgeon/howto/releases/latest)
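
For example, on Linux or macOS you could place the binary in `/usr/local/bin` (just one possible location; any directory in your `PATH` works):

```text
# make the binary executable and move it to a directory in your PATH
# (the destination is just an example)
chmod +x howto
sudo mv howto /usr/local/bin/
```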

**Note for macOS users**. macOS blocks unsigned binaries and prevents `howto` from running. To resolve this issue, remove the binary from quarantine by running the following command in Terminal (replace `/path/to/folder` with the actual path to the folder containing the `howto` binary):

```text
xattr -d com.apple.quarantine /path/to/folder/howto
```

## Configuration

Howto is configured using environment variables. It can use cloud AIs or local Ollama models (coming soon).

Cloud AI providers charge for using their API, except for Gemini, which offers a free plan but may use your data in their products. Ollama is free without conditions but uses your machine's CPU or GPU resources.

Here's how to set up an AI provider:

### OpenAI

1. Get an API key from the [OpenAI Settings](https://platform.openai.com/account/api-keys).
2. Save the key to the `HOWTO_AI_TOKEN` environment variable.
3. Optionally set the `HOWTO_AI_MODEL` environment variable to the model name you want to use (default is `gpt-4o`).
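
For example, in a POSIX shell (the key below is a placeholder, not a real one):

```text
# replace the placeholder with your actual OpenAI API key
export HOWTO_AI_TOKEN="sk-your-key-here"
# optional: the default model is gpt-4o
export HOWTO_AI_MODEL="gpt-4o"
```

Add these lines to your shell profile (e.g. `~/.bashrc` or `~/.zshrc`) to make the settings persistent.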

### OpenAI-compatible provider

Anything like [OpenRouter](https://openrouter.ai/docs/), [Nebius](https://docs.nebius.com/studio/inference/api) or [Gemini](https://ai.google.dev/gemini-api/docs/openai):

1. Obtain an API endpoint from the documentation and save it to the `HOWTO_AI_URL` environment variable. Here are the endpoints for common providers:

- OpenRouter: `https://openrouter.ai/api/v1/chat/completions`
- Nebius: `https://api.studio.nebius.ai/v1/chat/completions`
- Gemini: `https://generativelanguage.googleapis.com/v1beta/openai/chat/completions`

2. Get an API key from the provider and save it to the `HOWTO_AI_TOKEN` environment variable.
3. Set the `HOWTO_AI_MODEL` environment variable to the model name you want to use.
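
For example, to use Gemini through its OpenAI-compatible endpoint (the model name below is just an example):

```text
# Gemini's OpenAI-compatible endpoint (see the list above)
export HOWTO_AI_URL="https://generativelanguage.googleapis.com/v1beta/openai/chat/completions"
# replace the placeholder with your actual Gemini API key
export HOWTO_AI_TOKEN="<your Gemini API key>"
# the model name is just an example; use any model the provider offers
export HOWTO_AI_MODEL="gemini-1.5-flash"
```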

### Ollama (coming soon)

Ollama runs AI models locally on your machine. Here's how to set it up:

1. Download and install [Ollama](https://ollama.com/) for your operating system.
2. Set the [environment variables](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server) to use less memory:

```text
OLLAMA_KEEP_ALIVE = 1h
OLLAMA_FLASH_ATTENTION = 1
```

3. Restart Ollama.
4. Download the AI model Gemma 2 (or another model of your choice):

```text
ollama pull gemma2:2b
```

5. Set the `HOWTO_AI_VENDOR` environment variable to `ollama`.
6. Set the `HOWTO_AI_MODEL` environment variable to `gemma2:2b` (or another model of your choice).

Gemma 2 is a lightweight model that uses about 1GB of memory and works quickly without a GPU.
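
Once Ollama support is available, the howto side of the setup might look like this (a sketch based on the variables described above):

```text
# a sketch: point howto at the local Ollama model pulled above
export HOWTO_AI_VENDOR="ollama"
export HOWTO_AI_MODEL="gemma2:2b"
```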

### Other settings

- `HOWTO_AI_TEMPERATURE`. Sampling temperature to use (between 0 and 2). Higher values make the output more random, while lower values make it more focused and predictable. Default: 0
- `HOWTO_AI_TIMEOUT`. Timeout for AI API requests in seconds. Default: 30
- `HOWTO_PROMPT`. The system prompt for the AI.

To see the system prompt and other settings, run `howto -v`.
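
Since these are regular environment variables, you can also override one for a single run (POSIX shell syntax; the value and the question are just examples):

```text
# one-off override for a single invocation
HOWTO_AI_TEMPERATURE=0.7 howto find all files larger than 100mb
```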

## Usage

Describe your task to `howto`, and it will provide an answer:

```text
$ howto curl example.org but print only the headers
curl -I example.org

The `-I` option in `curl` is used to fetch the HTTP headers only, without the response body.
```

### Follow-ups

If you're not satisfied with an answer, refine it or ask a follow-up question by starting with `+`:

```text
$ howto a command that works kinda like diff but compares differently
comm file1 file2

The `comm` command compares two sorted files line by line and outputs three
columns: lines unique to the first file, lines unique to the second file, and
lines common to both files.

$ howto + yeah right i need only the intersection
comm -12 file1 file2

The `comm` command compares two sorted files line by line.
The `-12` option suppresses the first and second columns, showing only lines
common to both files (the intersection).
```

If you don't use `+`, howto will forget the previous conversation and treat your question as new.

### Run command

When satisfied with the suggested command, run `howto -run` to execute it without manually copying and pasting:

```text
$ howto curl example.org but print only the headers
curl -I example.org

The `curl` command is used to transfer data from or to a server.
The `-I` option tells `curl` to fetch the HTTP headers only, without the body
content.

$ howto -run
curl -I example.org

HTTP/1.1 200 OK
Content-Type: text/html
ETag: "84238dfc8092e5d9c0dac8ef93371a07:1736799080.121134"
Last-Modified: Mon, 13 Jan 2025 20:11:20 GMT
Cache-Control: max-age=2804
Date: Sun, 09 Feb 2025 12:54:51 GMT
Connection: keep-alive
```

That's it!

## License

Created by [Anton Zhiyanov](https://antonz.org/). Released under the MIT License.