Commit 73354d6

Merge pull request #4 from stacklok/makefile-etc
Makefile etc
2 parents a934fc9 + 6609fe2

File tree

10 files changed: +234 additions, −206 deletions

LICENSE

Lines changed: 1 addition & 1 deletion

@@ -186,7 +186,7 @@
       same "printed page" as the copyright notice for easier
       identification within third-party archives.

-   Copyright 2025 Stacklok, Inc.
+   Copyright 2023 Stacklok, Inc.

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.

Makefile

Lines changed: 73 additions & 0 deletions

@@ -0,0 +1,73 @@
+.PHONY: all setup test lint format check clean
+
+# Default target
+all: setup lint test
+
+# Setup development environment
+setup:
+	python -m pip install --upgrade pip
+	pip install -e ".[dev]"
+
+# Run tests
+test:
+	pytest tests/ -v
+
+# Run all linting and type checking
+lint: format-check lint-check type-check
+
+# Format code
+format:
+	black .
+	isort .
+
+# Check formatting
+format-check:
+	black --check .
+	isort --check .
+
+# Run linting
+lint-check:
+	ruff check .
+
+# Run type checking
+type-check:
+	mypy src/
+
+# Clean up
+clean:
+	rm -rf build/
+	rm -rf dist/
+	rm -rf *.egg-info
+	rm -rf .pytest_cache
+	rm -rf .mypy_cache
+	rm -rf .ruff_cache
+	find . -type d -name __pycache__ -exec rm -rf {} +
+	find . -type f -name "*.pyc" -delete
+
+# Build package
+build: clean
+	python -m build
+
+# Install package locally
+install:
+	pip install -e .
+
+# Install development dependencies
+install-dev:
+	pip install -e ".[dev]"
+
+# Help target
+help:
+	@echo "Available targets:"
+	@echo "  all          : Run setup, lint, and test"
+	@echo "  setup        : Set up development environment"
+	@echo "  test         : Run tests"
+	@echo "  lint         : Run all code quality checks"
+	@echo "  format       : Format code with black and isort"
+	@echo "  format-check : Check code formatting"
+	@echo "  lint-check   : Run ruff linter"
+	@echo "  type-check   : Run mypy type checker"
+	@echo "  clean        : Clean up build artifacts"
+	@echo "  build        : Build package"
+	@echo "  install      : Install package locally"
+	@echo "  install-dev  : Install package with development dependencies"

README.md

Lines changed: 34 additions & 123 deletions

@@ -1,35 +1,16 @@
 # Mock LLM Server
 
-[![CI](https://github.com/lukehinds/mockllm/actions/workflows/ci.yml/badge.svg)](https://github.com/lukehinds/mockllm/actions/workflows/ci.yml)
+[![CI](https://github.com/stacklok/mockllm/actions/workflows/ci.yml/badge.svg)](https://github.com/stacklok/mockllm/actions/workflows/ci.yml)
 [![PyPI version](https://badge.fury.io/py/mockllm.svg)](https://badge.fury.io/py/mockllm)
-[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
+
 
 A FastAPI-based mock LLM server that mimics OpenAI and Anthropic API formats. Instead of calling actual language models,
 it uses predefined responses from a YAML configuration file.
 
 This is made for when you want a deterministic response for testing or development purposes.
 
-Check out the [CodeGate](https://github.com/stacklok/codegate) when you're done here!
-
-## Project Structure
-
-```
-mockllm/
-├── src/
-│   └── mockllm/
-│       ├── __init__.py
-│       ├── config.py        # Response configuration handling
-│       ├── models.py        # Pydantic models for API
-│       └── server.py        # FastAPI server implementation
-├── tests/
-│   └── test_server.py       # Test suite
-├── example.responses.yml    # Example response configuration
-├── LICENSE                  # MIT License
-├── MANIFEST.in              # Package manifest
-├── README.md                # This file
-├── pyproject.toml           # Project configuration
-└── requirements.txt         # Dependencies
-```
+Check out the [CodeGate](https://github.com/stacklok/codegate) project when you're done here!
 
 ## Features
 
@@ -53,7 +34,7 @@ pip install mockllm
 
 1. Clone the repository:
 ```bash
-git clone https://github.com/lukehinds/mockllm.git
+git clone https://github.com/stacklok/mockllm.git
 cd mockllm
 ```
 
@@ -168,114 +149,49 @@ defaults:
 
 The server automatically detects changes to `responses.yml` and reloads the configuration without requiring a restart.
 
-## API Format
-
-### OpenAI Format
-
-#### Request Format
-
-```json
-{
-  "model": "mock-llm",
-  "messages": [
-    {"role": "user", "content": "what colour is the sky?"}
-  ],
-  "temperature": 0.7,
-  "max_tokens": 150,
-  "stream": false
-}
-```
-
-#### Response Format
-
-Regular response:
-```json
-{
-  "id": "mock-123",
-  "object": "chat.completion",
-  "created": 1700000000,
-  "model": "mock-llm",
-  "choices": [
-    {
-      "message": {
-        "role": "assistant",
-        "content": "The sky is blue during a clear day due to a phenomenon called Rayleigh scattering."
-      },
-      "finish_reason": "stop"
-    }
-  ],
-  "usage": {
-    "prompt_tokens": 10,
-    "completion_tokens": 5,
-    "total_tokens": 15
-  }
-}
-```
-
-Streaming response (Server-Sent Events format):
-```
-data: {"id":"mock-123","object":"chat.completion.chunk","created":1700000000,"model":"mock-llm","choices":[{"delta":{"role":"assistant"},"index":0}]}
-
-data: {"id":"mock-124","object":"chat.completion.chunk","created":1700000000,"model":"mock-llm","choices":[{"delta":{"content":"T"},"index":0}]}
-
-data: {"id":"mock-125","object":"chat.completion.chunk","created":1700000000,"model":"mock-llm","choices":[{"delta":{"content":"h"},"index":0}]}
-
-... (character by character)
-
-data: {"id":"mock-999","object":"chat.completion.chunk","created":1700000000,"model":"mock-llm","choices":[{"delta":{},"index":0,"finish_reason":"stop"}]}
-
-data: [DONE]
-```
-
-### Anthropic Format
-
-#### Request Format
-
-```json
-{
-  "model": "claude-3-sonnet-20240229",
-  "messages": [
-    {"role": "user", "content": "what colour is the sky?"}
-  ],
-  "max_tokens": 1024,
-  "stream": false
-}
-```
-
-#### Response Format
-
-Regular response:
-```json
-{
-  "id": "mock-123",
-  "type": "message",
-  "role": "assistant",
-  "model": "claude-3-sonnet-20240229",
-  "content": [
-    {
-      "type": "text",
-      "text": "The sky is blue during a clear day due to a phenomenon called Rayleigh scattering."
-    }
-  ],
-  "usage": {
-    "input_tokens": 10,
-    "output_tokens": 5,
-    "total_tokens": 15
-  }
-}
-```
-
-Streaming response (Server-Sent Events format):
-```
-data: {"type":"message_delta","id":"mock-123","delta":{"type":"content_block_delta","index":0,"delta":{"text":"T"}}}
-
-data: {"type":"message_delta","id":"mock-123","delta":{"type":"content_block_delta","index":0,"delta":{"text":"h"}}}
-
-... (character by character)
-
-data: [DONE]
-```
+## Development
+
+The project includes a Makefile to help with common development tasks:
+
+```bash
+# Set up development environment
+make setup
+
+# Run all checks (setup, lint, test)
+make all
+
+# Run tests
+make test
+
+# Format code
+make format
+
+# Run all linting and type checking
+make lint
+
+# Clean up build artifacts
+make clean
+
+# See all available commands
+make help
+```
+
+### Development Commands
+
+- `make setup`: Install all development dependencies
+- `make test`: Run the test suite
+- `make format`: Format code with black and isort
+- `make lint`: Run all code quality checks (format, lint, type)
+- `make build`: Build the package
+- `make clean`: Remove build artifacts and cache files
+- `make install-dev`: Install package with development dependencies
+
+For more details on available commands, run `make help`.
+
+## Contributing
+
+Contributions are welcome! Please feel free to submit a Pull Request.
+=======
 ## Development
 
 ### Running Tests
@@ -301,8 +217,6 @@ ruff check .
 
 ## Error Handling
 
-The server includes comprehensive error handling:
-
 - Invalid requests return 400 status codes with descriptive messages
 - Server errors return 500 status codes with error details
 - All errors are logged using JSON format
@@ -319,6 +233,3 @@ The server uses JSON-formatted logging for:
 
 Contributions are welcome! Please feel free to submit a Pull Request.
 
-## License
-
-This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
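
The removed "API Format" section documented the request payloads the server accepts; anyone verifying this commit locally can still exercise the server with them. Below is a minimal smoke-test sketch, not part of the commit: it assumes the server from `main.py` is running on localhost:8000 and that the OpenAI-compatible route lives at `/v1/chat/completions` (the path is an assumption; the payload shape comes from the removed examples).

```python
# Smoke-test sketch -- the /v1/chat/completions path is an assumption; the
# payload mirrors the request example from the removed "API Format" section.
import json
import urllib.request

payload = {
    "model": "mock-llm",
    "messages": [{"role": "user", "content": "what colour is the sky?"}],
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    # Expected shape, per the removed example: choices[0].message.content
    print(body["choices"][0]["message"]["content"])
```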

main.py

Lines changed: 1 addition & 1 deletion

@@ -3,4 +3,4 @@
 from src.mockllm.server import app
 
 if __name__ == "__main__":
-    uvicorn.run(app, host="0.0.0.0", port=8000, reload=True)
\ No newline at end of file
+    uvicorn.run(app, host="0.0.0.0", port=8000, reload=True)

pyproject.toml

Lines changed: 3 additions & 2 deletions

@@ -8,10 +8,10 @@ dynamic = ["version"]
 description = "A mock server that mimics OpenAI and Anthropic API formats for testing"
 readme = "README.md"
 requires-python = ">=3.8"
-license = {text = "Apache License (2.0)"}
+license = {text = "Apache-2.0"}
 keywords = ["mock", "llm", "openai", "anthropic", "testing"]
 authors = [
-    {name = "Luke Hinds", email = "lhinds@redhat.com"}
+    {name = "Luke Hinds", email = "luke@stacklok.com"}
 ]
 classifiers = [
     "Development Status :: 4 - Beta",
@@ -77,3 +77,4 @@ line-length = 88
 target-version = "py38"
 select = ["E", "F", "B", "I"]
 ignore = []
+
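
The license string now uses the SPDX identifier `Apache-2.0` rather than the free-form `Apache License (2.0)`. A sketch of one way to confirm the installed metadata, assuming the `mockllm` distribution has been installed (e.g. via `make install`):

```python
# Sketch: read the installed distribution's metadata. Assumes "mockllm" is
# installed; importlib.metadata is stdlib from Python 3.8.
from importlib.metadata import metadata

meta = metadata("mockllm")
print(meta["License"])  # expected to read "Apache-2.0" after this change
```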

src/mockllm/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -2,4 +2,4 @@
 Mock LLM Server - You will do what I tell you!
 """
 
-__version__ = "0.1.0"
\ No newline at end of file
+__version__ = "0.1.0"

src/mockllm/_version.py

Lines changed: 17 additions & 0 deletions

@@ -0,0 +1,17 @@
+# file generated by setuptools_scm
+# don't change, don't track in version control
+TYPE_CHECKING = False
+if TYPE_CHECKING:
+    from typing import Tuple, Union
+
+    VERSION_TUPLE = Tuple[Union[int, str], ...]
+else:
+    VERSION_TUPLE = object
+
+version: str
+__version__: str
+__version_tuple__: VERSION_TUPLE
+version_tuple: VERSION_TUPLE
+
+__version__ = version = "0.1.dev13+gb4dbfaf"
+__version_tuple__ = version_tuple = (0, 1, "dev13", "gb4dbfaf")
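
This file is what setuptools_scm writes at build time; the version string is derived from the git state, which is why it carries a `devN+g<hash>` suffix here. Since the generated module is plain Python, consuming it is just an import — a sketch, assuming the package is importable:

```python
# Sketch: read the generated version module directly (the values below are
# the ones committed in this diff; a real checkout would regenerate them).
from mockllm._version import __version__, __version_tuple__

print(__version__)        # "0.1.dev13+gb4dbfaf"
print(__version_tuple__)  # (0, 1, "dev13", "gb4dbfaf")
```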

src/mockllm/config.py

Lines changed: 14 additions & 10 deletions

@@ -11,6 +11,7 @@
 logging.basicConfig(level=logging.INFO, handlers=[log_handler])
 logger = logging.getLogger(__name__)
 
+
 class ResponseConfig:
     """Handles loading and managing response configurations from YAML."""
 
@@ -26,34 +27,37 @@ def load_responses(self) -> None:
         try:
             current_mtime = Path(self.yaml_path).stat().st_mtime
             if current_mtime > self.last_modified:
-                with open(self.yaml_path, 'r') as f:
+                with open(self.yaml_path, "r") as f:
                     data = yaml.safe_load(f)
-                self.responses = data.get('responses', {})
-                self.default_response = data.get('defaults', {}).get(
-                    'unknown_response', self.default_response
+                self.responses = data.get("responses", {})
+                self.default_response = data.get("defaults", {}).get(
+                    "unknown_response", self.default_response
                 )
                 self.last_modified = current_mtime
-                logger.info(f"Loaded {len(self.responses)} responses from {self.yaml_path}")
+                logger.info(
+                    f"Loaded {len(self.responses)} responses from {self.yaml_path}"
+                )
         except Exception as e:
             logger.error(f"Error loading responses: {str(e)}")
             raise HTTPException(
-                status_code=500,
-                detail="Failed to load response configuration"
+                status_code=500, detail="Failed to load response configuration"
             )
 
     def get_response(self, prompt: str) -> str:
         """Get response for a given prompt."""
         self.load_responses()  # Check for updates
         return self.responses.get(prompt.lower().strip(), self.default_response)
 
-    def get_streaming_response(self, prompt: str, chunk_size: Optional[int] = None) -> str:
+    def get_streaming_response(
+        self, prompt: str, chunk_size: Optional[int] = None
+    ) -> str:
         """Generator that yields response content character by character or in chunks."""
         response = self.get_response(prompt)
         if chunk_size:
             # Yield response in chunks
             for i in range(0, len(response), chunk_size):
-                yield response[i:i + chunk_size]
+                yield response[i : i + chunk_size]
         else:
             # Yield response character by character
             for char in response:
-                yield char
\ No newline at end of file
+                yield char
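
The changes to this file are mechanical black-style reformatting (double quotes, slice spacing, wrapped signatures), but `get_streaming_response` deserves a usage note: it is a generator despite the `-> str` annotation. A behavior sketch follows — the `yaml_path` constructor argument is an assumption, since `__init__` falls outside the hunks shown here:

```python
# Usage sketch for ResponseConfig (the yaml_path constructor argument is an
# assumption; __init__ is not shown in this diff). With chunk_size set, the
# generator yields fixed-size slices; without it, one character at a time.
from mockllm.config import ResponseConfig

config = ResponseConfig("responses.yml")
for chunk in config.get_streaming_response("what colour is the sky?", chunk_size=8):
    print(chunk, end="|")  # e.g. "The sky |is blue |..."
print()
```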
