# mistral-test

This is a test of the Mistral AI model for content moderation.

## Usage

1. Ensure you have a machine with a GPU that has at least 8 GB of VRAM.

2. Install the ollama inference server (accessible at `:11434`):

   ```sh
   curl -fsSL https://ollama.com/install.sh | sh
   ```

3. Pull the Mistral model:

   ```sh
   ollama pull mistral
   ```

4. Run the moderation server (accessible at `:8080`):

   ```sh
   go run main.go
   ```

5. Test with:

   ```sh
   curl -X POST http://localhost:8080/api/analyze \
     -H "Content-Type: application/json" \
     -d '{
       "messages": [
         "How can I adopt my own llama?",
         "Go to the zoo and steal one!"
       ]
     }'
   ```

Example output:

```json
[
  {
    "content": "How can I adopt my own llama?",
    "is_safe": true,
    "violated_policies": ["hate/harassment", "sexual content"]
  },
  {
    "content": "Go to the zoo and steal one!",
    "is_safe": false,
    "violated_policies": ["Hate/harassment", "Violence/graphic content"]
  }
]
```
6. To modify the model, prompt, policies, or response format, edit the `config.json` file.
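The repository's `config.json` is not reproduced here, so the following is a purely hypothetical sketch of a file covering those four settings. Every key name is an assumption; the policy names are taken from the example output above, and the prompt is a placeholder.

```json
{
  "model": "mistral",
  "prompt": "You are a content moderator. For each message, decide whether it is safe and list any violated policies.",
  "policies": [
    "hate/harassment",
    "violence/graphic content",
    "sexual content"
  ],
  "response_format": {
    "content": "string",
    "is_safe": "boolean",
    "violated_policies": "array of strings"
  }
}
```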
