
Commit e083610

initial commit (0 parents)

File tree

18 files changed: +2268 -0 lines changed


.github/workflows/build.yml

Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
name: build

on:
  push:
    branches: [main]
    paths-ignore:
      - README.md
  pull_request:
    branches: [main]
  workflow_dispatch:

jobs:
  build:
    name: Build and test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version-file: "go.mod"

      - name: Build and Test
        run: |
          go get .
          go build -v .
          go test -v ./...

LICENSE

Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Anton Zhiyanov <https://github.com/nalgeon/howto>

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Makefile

Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
BUILD_TAG := $(shell git describe --tags)

.PHONY: build
build:
	@go build -ldflags "-X main.version=$(BUILD_TAG)" -o howto

.PHONY: test
test:
	@go test ./...

.PHONY: lint
lint:
	@go vet ./...
	@golangci-lint run --print-issued-lines=false --out-format=colored-line-number ./...

README.md

Lines changed: 173 additions & 0 deletions
@@ -0,0 +1,173 @@
## Howto - a humble command-line assistant

Howto helps you solve command-line tasks with AI. Describe the task, and `howto` will suggest a solution:

```text
$ howto curl example.org but print only the headers
curl -I example.org

The `curl` command is used to transfer data from or to a server.
The `-I` option tells `curl` to fetch the HTTP headers only, without the body
content.
```

Howto works with any OpenAI-compatible provider and local Ollama models (coming soon). It's a simple tool that doesn't interfere with your terminal. Not an "intelligent terminal" or anything. You ask, and howto answers. That's the deal.

```text
Usage: howto [-h] [-v] [-run] [question]

A humble command-line assistant.

Options:
  -h, --help     Show this help message and exit
  -v, --version  Show version information and exit
  -run           Run the last suggested command
  question       Describe the task to get a command suggestion
                 Use '+' to ask a follow-up question
```

There are some additional features you may find useful. See the Usage section for details.

## Installation

### Go install

This method is preferred if you have Go installed:

```text
go install github.com/nalgeon/howto@latest
```

### Manual

`howto` is a binary executable file (`howto.exe` on Windows, `howto` on Linux/macOS). Download it from the link below, unpack it, and put it somewhere in your `PATH` ([what's that?](https://gist.github.com/nex3/c395b2f8fd4b02068be37c961301caa7)) so you can run it from anywhere on your computer.

[**Download**](https://github.com/nalgeon/howto/releases/latest)
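
For example, on Linux or macOS you might make the downloaded binary executable and move it onto your `PATH` like this (a sketch; adjust the download location and target directory for your system):

```text
chmod +x ./howto
sudo mv ./howto /usr/local/bin/howto
```

On Windows, put `howto.exe` in any folder that is already listed in `PATH`.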

**Note for macOS users**. macOS disables unsigned binaries and prevents `howto` from running. To resolve this issue, remove the binary from quarantine by running the following command in Terminal (replace `/path/to/folder` with the actual path to the folder containing the `howto` binary):

```text
xattr -d com.apple.quarantine /path/to/folder/howto
```

## Configuration

Howto is configured using environment variables. It can use cloud AIs or local Ollama models (coming soon).

Cloud AI providers charge for using their API, except for Gemini, which offers a free plan but may use your data in their products. Ollama is free without conditions but uses your machine's CPU or GPU resources.

Here's how to set up an AI provider:

### OpenAI

1. Get an API key from the [OpenAI Settings](https://platform.openai.com/account/api-keys).
2. Save the key to the `HOWTO_AI_TOKEN` environment variable.
3. Optionally, set the `HOWTO_AI_MODEL` environment variable to the model name you want to use (default is `gpt-4o`); see the example below.
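
For example, in bash or zsh the OpenAI setup might look like this (a sketch; the key is a placeholder, and the model line is optional):

```text
export HOWTO_AI_TOKEN="sk-your-key-here"
export HOWTO_AI_MODEL="gpt-4o"
```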

### OpenAI-compatible provider

Anything like [OpenRouter](https://openrouter.ai/docs/), [Nebius](https://docs.nebius.com/studio/inference/api), or [Gemini](https://ai.google.dev/gemini-api/docs/openai):

1. Obtain an API endpoint from the documentation and save it to the `HOWTO_AI_URL` environment variable. Here are the endpoints for common providers:

   - OpenRouter: `https://openrouter.ai/api/v1/chat/completions`
   - Nebius: `https://api.studio.nebius.ai/v1/chat/completions`
   - Gemini: `https://generativelanguage.googleapis.com/v1beta/openai/chat/completions`

2. Get an API key from the provider and save it to the `HOWTO_AI_TOKEN` environment variable.
3. Set the `HOWTO_AI_MODEL` environment variable to the model name you want to use; see the example below.
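
For example, a bash or zsh setup for OpenRouter might look like this (a sketch; the key is a placeholder and the model name is only an illustration, use any model your provider offers):

```text
export HOWTO_AI_URL="https://openrouter.ai/api/v1/chat/completions"
export HOWTO_AI_TOKEN="your-openrouter-key"
export HOWTO_AI_MODEL="openai/gpt-4o-mini"
```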

### Ollama (coming soon)

Ollama runs AI models locally on your machine. Here's how to set it up:

1. Download and install [Ollama](https://ollama.com/) for your operating system.
2. Set the [environment variables](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server) to use less memory:

   ```text
   OLLAMA_KEEP_ALIVE = 1h
   OLLAMA_FLASH_ATTENTION = 1
   ```

3. Restart Ollama.
4. Download the AI model Gemma 2 (or another model of your choice):

   ```text
   ollama pull gemma2:2b
   ```

5. Set the `HOWTO_AI_VENDOR` environment variable to `ollama`.
6. Set the `HOWTO_AI_MODEL` environment variable to `gemma2:2b` (or another model of your choice); see the example below.
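
Once Ollama support lands, the howto side of the setup might look like this in bash or zsh (a sketch based on the steps above; the model name matches step 4):

```text
export HOWTO_AI_VENDOR="ollama"
export HOWTO_AI_MODEL="gemma2:2b"
```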

Gemma 2 is a lightweight model that uses about 1GB of memory and works quickly without a GPU.

### Other settings

- `HOWTO_AI_TEMPERATURE`. Sampling temperature to use (between 0 and 2). Higher values make the output more random, while lower values make it more focused and predictable. Default: 0
- `HOWTO_AI_TIMEOUT`. Timeout for AI API requests in seconds. Default: 30
- `HOWTO_PROMPT`. The system prompt for the AI.

To see the system prompt and other settings, run `howto -v`.
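
For example, to allow slightly more creative answers and give slow providers more time, you might override the defaults like this (a bash or zsh sketch; the values are illustrative):

```text
export HOWTO_AI_TEMPERATURE="0.7"
export HOWTO_AI_TIMEOUT="60"
```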

## Usage

Describe your task to `howto`, and it will provide an answer:

```text
$ howto curl example.org but print only the headers
curl -I example.org

The `-I` option in `curl` is used to fetch the HTTP headers only, without the response body.
```

### Follow-ups

If you're not satisfied with an answer, refine it or ask a follow-up question by starting with `+`:

```text
$ howto a command that works kinda like diff but compares differently
comm file1 file2

The `comm` command compares two sorted files line by line and outputs three
columns: lines unique to the first file, lines unique to the second file, and
lines common to both files.

$ howto + yeah right i need only the intersection
comm -12 file1 file2

The `comm` command compares two sorted files line by line.
The `-12` option suppresses the first and second columns, showing only lines
common to both files (the intersection).
```

If you don't use `+`, howto will forget the previous conversation and treat your question as new.

### Run command

When satisfied with the suggested command, run `howto -run` to execute it without manually copying and pasting:

```text
$ howto curl example.org but print only the headers
curl -I example.org

The `curl` command is used to transfer data from or to a server.
The `-I` option tells `curl` to fetch the HTTP headers only, without the body
content.

$ howto -run
curl -I example.org

HTTP/1.1 200 OK
Content-Type: text/html
ETag: "84238dfc8092e5d9c0dac8ef93371a07:1736799080.121134"
Last-Modified: Mon, 13 Jan 2025 20:11:20 GMT
Cache-Control: max-age=2804
Date: Sun, 09 Feb 2025 12:54:51 GMT
Connection: keep-alive
```

That's it!

## License

Created by [Anton Zhiyanov](https://antonz.org/). Released under the MIT License.

ai/ai.go

Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
// Package ai is responsible for interacting with the cloud AI
// or local Ollama models. It provides a simple vendor-agnostic
// interface to ask questions and get answers.
package ai

import (
	"fmt"
	"net/http"
	"os"
)

// AskFunc is a function that sends a question to the AI.
type AskFunc func(history []string) (string, error)

// Ask sends a question to the AI and returns the answer.
// It uses the configuration prompt and conversation history
// to create a message for the AI.
// Ask is the main interface of the ai package.
var Ask AskFunc

// Conf describes the AI configuration.
var Conf Config

// HTTP client used to make requests to the AI.
var httpClient *http.Client

// message represents a single message in the conversation.
type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

func init() {
	// Load the configuration.
	config, err := loadConfig()
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	Conf = config

	// Set the Ask function based on the vendor.
	switch config.Vendor {
	case "openai":
		Ask = openai{config}.Ask
	default:
		fmt.Println("Unknown AI vendor:", config.Vendor)
		os.Exit(1)
	}

	// Create an HTTP client with a timeout.
	httpClient = &http.Client{
		Timeout: config.Timeout,
	}
}

// buildMessages constructs a list of messages from the prompt
// and the conversation history (a sequence of user and assistant messages).
func buildMessages(prompt string, history []string) []message {
	var messages []message
	messages = append(messages, message{Role: "system", Content: prompt})
	for i := 0; i < len(history); i += 2 {
		messages = append(messages, message{Role: "user", Content: history[i]})
		if i+1 < len(history) {
			messages = append(messages, message{Role: "assistant", Content: history[i+1]})
		}
	}
	return messages
}

ai/ai_test.go

Lines changed: 71 additions & 0 deletions
@@ -0,0 +1,71 @@
package ai

import (
	"reflect"
	"testing"
)

func Test_buildMessages(t *testing.T) {
	prompt := "You are a helpful assistant."

	tests := []struct {
		name    string
		history []string
		want    []message
	}{
		{
			name:    "No history",
			history: []string{},
			want: []message{
				{Role: "system", Content: prompt},
			},
		},
		{
			name:    "Single user message",
			history: []string{"Hello"},
			want: []message{
				{Role: "system", Content: prompt},
				{Role: "user", Content: "Hello"},
			},
		},
		{
			name:    "User and assistant message",
			history: []string{"Hello", "Hi there!"},
			want: []message{
				{Role: "system", Content: prompt},
				{Role: "user", Content: "Hello"},
				{Role: "assistant", Content: "Hi there!"},
			},
		},
		{
			name:    "Multiple user and assistant messages",
			history: []string{"Hello", "Hi there!", "How are you?", "I'm fine, thank you."},
			want: []message{
				{Role: "system", Content: prompt},
				{Role: "user", Content: "Hello"},
				{Role: "assistant", Content: "Hi there!"},
				{Role: "user", Content: "How are you?"},
				{Role: "assistant", Content: "I'm fine, thank you."},
			},
		},
		{
			name:    "Odd number of messages",
			history: []string{"Hello", "Hi there!", "How are you?"},
			want: []message{
				{Role: "system", Content: prompt},
				{Role: "user", Content: "Hello"},
				{Role: "assistant", Content: "Hi there!"},
				{Role: "user", Content: "How are you?"},
			},
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := buildMessages(prompt, tt.history)
			if !reflect.DeepEqual(got, tt.want) {
				t.Errorf("Expected: %v, got: %v", tt.want, got)
			}
		})
	}
}

0 commit comments
