A powerful Python tool for performing technical searches using the Perplexity API, optimized for retrieving precise facts, code examples, and numerical data.
Overview • Features • Installation • Usage • Configuration • Requirements • Error Handling • Contributing • FAQ • License
Perplexity Search is a command-line tool and Python library that leverages the power of Perplexity AI to provide accurate, technical search results. It's designed for developers, researchers, and technical users who need quick access to precise information, code examples, and technical documentation.
- Interactive Mode: Engage in a conversational interface where you can ask multiple queries in sequence.
- Perform searches using different LLaMA models (small, large, huge)
- Configurable API key support via environment variable or direct input
- Customizable search queries with temperature and other parameters
- Command-line interface for easy usage
- Focused on retrieving technical information with code examples
- Returns responses formatted in markdown
- Optimized for factual and numerical data
pip install plexsearch
from perplexity_search import perform_search
# Using environment variable for API key
result = perform_search("What is Python's time complexity for list operations?")
# Or passing API key directly
result = perform_search("What are the differences between Python 3.11 and 3.12?", api_key="your-api-key")
# Specify a different model
result = perform_search("Show me example code for Python async/await", model="llama-3.1-sonar-huge-128k-online")
To enter interactive mode, simply run the command without any query arguments:
plexsearch
In interactive mode, you can type your queries one by one. Type exit to quit the interactive session.
# Basic search
plexsearch "What is Python's time complexity for list operations?"
# Specify model
plexsearch --model llama-3.1-sonar-huge-128k-online "What are the differences between Python 3.11 and 3.12?"
# Use specific API key
plexsearch --api-key your-api-key "Show me example code for Python async/await"
# Multi-word queries work naturally
plexsearch tell me about frogs
# Disable streaming output
plexsearch --no-stream "tell me about frogs"
# Show numbered citations at the bottom of the response
plexsearch --citations "tell me about Python's GIL"
Note: Streaming is automatically disabled when running inside Aider to prevent
filling up the context window.
Set your Perplexity API key in one of these ways:
- Environment variable:
export PERPLEXITY_API_KEY=your-api-key
# Or add to your ~/.bashrc or ~/.zshrc for persistence
echo 'export PERPLEXITY_API_KEY=your-api-key' >> ~/.bashrc
- Pass directly in code or CLI:
--api-key your-api-key
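If you resolve the key yourself before calling the library, a common pattern is to fall back from an explicit argument to the environment variable. The sketch below is illustrative only; resolve_api_key is a hypothetical helper, not part of the package:

import os
from perplexity_search import perform_search

# Hypothetical helper: prefer an explicit key, otherwise read PERPLEXITY_API_KEY
def resolve_api_key(explicit_key=None):
    key = explicit_key or os.environ.get("PERPLEXITY_API_KEY")
    if not key:
        raise RuntimeError("No API key found; set PERPLEXITY_API_KEY or pass --api-key")
    return key

result = perform_search("What changed in Python 3.12?", api_key=resolve_api_key())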
The following models can be specified using the --model parameter:
- llama-3.1-sonar-small-128k-online (Faster, lighter model)
- llama-3.1-sonar-large-128k-online (Default, balanced model)
- llama-3.1-sonar-huge-128k-online (Most capable model)
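If you switch models often in your own scripts, it can help to keep the full identifiers behind the shorter small/large/huge names used above. MODEL_ALIASES below is a hypothetical convenience mapping, not something the package exports:

from perplexity_search import perform_search

# Hypothetical aliases for the three supported model identifiers
MODEL_ALIASES = {
    "small": "llama-3.1-sonar-small-128k-online",  # faster, lighter
    "large": "llama-3.1-sonar-large-128k-online",  # default, balanced
    "huge": "llama-3.1-sonar-huge-128k-online",    # most capable
}

result = perform_search("Summarize PEP 703", model=MODEL_ALIASES["huge"])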
- Python 3.x
- requests library
- Perplexity API key (sign up on the Perplexity API website to obtain one)
The tool includes error handling for:
- Missing API keys
- Invalid API responses
- Network issues
- Invalid model selections
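In your own code you may want to catch these failures rather than let them crash a script. The exact exception types raised by perform_search are not documented here, so the sketch below is a conservative assumption: it treats network problems as requests exceptions and everything else (missing key, invalid model, invalid API response) as generic errors:

import requests
from perplexity_search import perform_search

try:
    result = perform_search("Show me example code for Python async/await")
except requests.exceptions.RequestException as err:
    # Network issues (timeouts, connection errors, bad HTTP status)
    print(f"Network error while contacting the Perplexity API: {err}")
except Exception as err:
    # Missing API key, invalid model selection, or an invalid API response
    print(f"Search failed: {err}")
else:
    print(result)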
We welcome contributions! Please see our CONTRIBUTING.md for more details on how to contribute to this project. Check our CHANGELOG.md for recent updates and changes.
Q: How do I get an API key for Perplexity?
A: You can obtain an API key by signing up on the Perplexity API website.
Q: What models are available for search?
A: The available models are small, large, and huge.
MIT License - see the LICENSE file for details