diff --git a/README.md b/README.md
index f6bcfc7..c05d402 100644
--- a/README.md
+++ b/README.md
@@ -1,103 +1,85 @@
-# GroqCall.ai (I changed the name from FunckyCall to GroqCall)
+# GroqCall.ai - Lightning-Fast LLM Function Calls
+
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1q3is7qynCsx4s7FBznCfTMnokbKWIv1F?usp=sharing)
[![Version](https://img.shields.io/badge/version-0.0.1-blue.svg)](https://github.com/unclecode/groqcall)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
-GroqCall is a proxy server provides function call for Groq's lightning-fast Language Processing Unit (LPU) and other AI providers. Additionally, the upcoming FuncyHub will offer a wide range of built-in functions, hosted on the cloud, making it easier to create AI assistants without the need to maintain function schemas in the codebase or or execute them through multiple calls.
-
-## Motivation 🚀
-Groq is a startup that designs highly specialized processor chips aimed specifically at running inference on large language models. They've introduced what they call the Language Processing Unit (LPU), and the speed is astounding—capable of producing 500 to 800 tokens per second or more. I've become a big fan of Groq and their community;
-
-
-I admire what they're doing. It feels like after discovering electricity, the next challenge is moving it around quickly and efficiently. Groq is doing just that for Artificial Intelligence, making it easily accessible everywhere. They've opened up their API to the cloud, but as of now, they lack a function call capability.
-
-Unable to wait for this feature, I built a proxy that enables function calls using the OpenAI interface, allowing it to be called from any library. This engineering workaround has proven to be immensely useful in my company for various projects. Here's the link to the GitHub repository where you can explore and play around with it. I've included some examples in this collaboration for you to check out.
-
-
+GroqCall is a proxy server that enables lightning-fast function calls for Groq's Language Processing Unit (LPU) and other AI providers. It simplifies the creation of AI assistants by offering a wide range of built-in functions hosted on the cloud.
-
+## Quickstart
+### Using the Pre-built Server
+To get started quickly, make requests to one of the following base URLs:
-## Running the Proxy Locally 🖥️
-To run this proxy locally on your own machine, follow these steps:
+- Cloud: `https://groqcall.ai/proxy/groq/v1`
+- Local: `http://localhost:8000/proxy/groq/v1` (if running the proxy server locally)
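+
+Because the proxy speaks the OpenAI chat completions protocol, any OpenAI-compatible client can talk to it. As a quick connectivity check, here is a minimal sketch using the official `openai` Python package (illustrative only; it assumes `pip install openai` and a valid Groq API key):
+
+```python
+from openai import OpenAI
+
+client = OpenAI(
+    api_key="YOUR_GROQ_API_KEY",
+    base_url="https://groqcall.ai/proxy/groq/v1"  # or the local URL above
+)
+
+response = client.chat.completions.create(
+    model="mixtral-8x7b-32768",
+    messages=[{"role": "user", "content": "Say hello in one short sentence."}]
+)
+print(response.choices[0].message.content)
+```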
-1. Clone the GitHub repository:
-```git clone https://github.com/unclecode/groqcall.git```
+### Running the Proxy Locally
-2. Navigate to the project directory:
-```cd groqcall```
-
-3. Create a virtual environment:
-```python -m venv venv```
-
-4. Activate virtual environment:
-```source venv/bin/activate```
-
-5. Install the required libraries:
-```pip install -r requirements.txt```
+1. Clone the repository:
+```
+git clone https://github.com/unclecode/groqcall.git
+cd groqcall
+```
-6. Run the FastAPI server:
-```./venv/bin/uvicorn --app-dir app/ main:app --reload```
+2. Create and activate a virtual environment:
+```
+python -m venv venv
+source venv/bin/activate
+```
+3. Install dependencies:
+```
+pip install -r requirements.txt
+```
-## Using the Pre-built Server 🌐
-For your convenience, I have already set up a server that you can use temporarily. This allows you to quickly start using the proxy without having to run it locally.
+4. Run the FastAPI server:
+```
+./venv/bin/uvicorn --app-dir app/ main:app --reload
+```
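+
+The server listens on `http://localhost:8000` by default. As a quick sanity check (a minimal sketch, assuming the `requests` package is installed), you can call the `/version` endpoint defined in `app/main.py`:
+
+```python
+import requests
+
+# The proxy exposes a /version endpoint that returns e.g. {"version": "0.0.1"}
+print(requests.get("http://localhost:8000/version").json())
+```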
-To use the pre-built server, simply make requests to the following base URL:
-```https://groqcall.ai/proxy/groq/v1```
+## Examples
+### Using GroqCall with PhiData
-## Exploring GroqCall.ai 🚀
-This README is organized into three main sections, each showcasing different aspects of GroqCall.ai:
+```python
+from phi.llm.openai.like import OpenAILike
+from phi.assistant import Assistant
+from phi.tools.duckduckgo import DuckDuckGo
-- **Sending POST Requests**: Here, I explore the functionality of sending direct POST requests to LLMs using GroqCall.ai. This section highlights the flexibility and control offered by the library when interacting with LLMs.
-- **FuncHub**: The second section introduces the concept of FuncHub, a useful feature that simplifies the process of executing functions. With FuncHub, there is no need to send the function JSON schema explicitly, as the functions are already hosted on the proxy server. This approach streamlines the workflow, allowing developers to obtain results with a single call without having to handle function call is production server.
-- **Using GroqCall with PhiData**: In this section, I demonstrate how GroqCall.ai can be seamlessly integrated with other libraries such as my favorite one, the PhiData library, leveraging its built-in tools to connect to LLMs and perform external tool requests.
+my_groq = OpenAILike(
+ model="mixtral-8x7b-32768",
+ api_key="YOUR_GROQ_API_KEY",
+ base_url="https://groqcall.ai/proxy/groq/v1" # or "http://localhost:8000/proxy/groq/v1" if running locally
+)
+assistant = Assistant(
+ llm=my_groq,
+ tools=[DuckDuckGo()],
+ show_tool_calls=True,
+ markdown=True
+)
-```python
-# The following libraries are optional if you're interested in using PhiData or managing your tools on the client side.
-!pip install phidata > /dev/null
-!pip install openai > /dev/null
-!pip install duckduckgo-search > /dev/null
+assistant.print_response("What's happening in France? Summarize top stories with sources, very short and concise.", stream=False)
```
-## Sending POST request, with full functions implementation
+### Using GroqCall with Requests
+
+#### FuncHub: Schema-less Function Calls
+GroqCall introduces FuncHub, which allows you to make function calls without passing the function schema.
```python
-from duckduckgo_search import DDGS
-import requests, os
-import json
+import requests
-# Here you pass your own GROQ API key
-api_key=userdata.get("GROQ_API_KEY")
+api_key = "YOUR_GROQ_API_KEY"
header = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json"
}
-proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions"
-
-
-def duckduckgo_search(query, max_results=None):
- """
- Use this function to search DuckDuckGo for a query.
- """
- with DDGS() as ddgs:
- return [r for r in ddgs.text(query, safesearch='off', max_results=max_results)]
-
-def duckduckgo_news(query, max_results=None):
- """
- Use this function to get the latest news from DuckDuckGo.
- """
- with DDGS() as ddgs:
- return [r for r in ddgs.news(query, safesearch='off', max_results=max_results)]
-
-function_map = {
- "duckduckgo_search": duckduckgo_search,
- "duckduckgo_news": duckduckgo_news,
-}
+
+proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions" # or "http://localhost:8000/proxy/groq/v1/chat/completions" if running locally
request = {
"messages": [
@@ -106,8 +88,8 @@ request = {
"content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n\n1. Use markdown to format your answers.\n"
},
{
- "role": "user",
- "content": "Whats happening in France? Summarize top stories with sources, very short and concise."
+ "role": "user",
+ "content": "What's happening in France? Summarize top stories with sources, very short and concise."
}
],
"model": "mixtral-8x7b-32768",
@@ -116,43 +98,13 @@ request = {
{
"type": "function",
"function": {
- "name": "duckduckgo_search",
- "description": "Use this function to search DuckDuckGo for a query.\n\nArgs:\n query(str): The query to search for.\n max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n The result from DuckDuckGo.",
- "parameters": {
- "type": "object",
- "properties": {
- "query": {
- "type": "string"
- },
- "max_results": {
- "type": [
- "number",
- "null"
- ]
- }
- }
- }
+ "name": "duckduck.search"
}
},
{
"type": "function",
"function": {
- "name": "duckduckgo_news",
- "description": "Use this function to get the latest news from DuckDuckGo.\n\nArgs:\n query(str): The query to search for.\n max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n The latest news from DuckDuckGo.",
- "parameters": {
- "type": "object",
- "properties": {
- "query": {
- "type": "string"
- },
- "max_results": {
- "type": [
- "number",
- "null"
- ]
- }
- }
- }
+ "name": "duckduck.news"
}
}
]
@@ -163,157 +115,36 @@ response = requests.post(
headers=header,
json=request
)
-if response.status_code == 200:
- res = response.json()
- message = res['choices'][0]['message']
- tools_response_messages = []
- if not message['content'] and 'tool_calls' in message:
- for tool_call in message['tool_calls']:
- tool_name = tool_call['function']['name']
- tool_args = tool_call['function']['arguments']
- tool_args = json.loads(tool_args)
- if tool_name not in function_map:
- print(f"Error: {tool_name} is not a valid function name.")
- continue
- tool_func = function_map[tool_name]
- tool_response = tool_func(**tool_args)
- tools_response_messages.append({
- "role": "tool", "content": json.dumps(tool_response)
- })
-
- if tools_response_messages:
- request['messages'] += tools_response_messages
- response = requests.post(
- proxy_url,
- headers=header,
- json=request
- )
- if response.status_code == 200:
- res = response.json()
- print(res['choices'][0]['message']['content'])
- else:
- print("Error:", response.status_code, response.text)
- else:
- print(message['content'])
-else:
- print("Error:", response.status_code, response.text)
-
-```
-
-## Schema-less Function Call 🤩
-In this method, we only need to provide the function's name, which consists of two parts, acting as a sort of namespace. The first part identifies the library or toolkit containing the functions, and the second part specifies the function's name, assuming it's already available on the proxy server. I aim to collaborate with the community to incorporate all typical functions, eliminating the need for passing a schema. Without having to handle function calls ourselves, a single request to the proxy enables it to identify and execute the functions, retrieve responses from large language models, and return the results to us. Thanks to Groq, all of this occurs in just seconds.
-
-
-```python
-from duckduckgo_search import DDGS
-import requests, os
-api_key = userdata.get("GROQ_API_KEY")
-header = {
- "Authorization": f"Bearer {api_key}",
- "Content-Type": "application/json"
-}
-
-proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions"
-
-
-request = {
- "messages": [
- {
- "role": "system",
- "content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n\n1. Use markdown to format your answers.\n",
- },
- {
- "role": "user",
- "content": "Whats happening in France? Summarize top stories with sources, very short and concise. Also please search about the histoy of france as well.",
- },
- ],
- "model": "mixtral-8x7b-32768",
- "tool_choice": "auto",
- "tools": [
- {
- "type": "function",
- "function": {
- "name": "duckduck.search",
- },
- },
- {
- "type": "function",
- "function": {
- "name": "duckduck.news",
- },
- },
- ],
-}
-
-response = requests.post(
- proxy_url,
- headers=header,
- json=request,
-)
-
-if response.status_code == 200:
- res = response.json()
- print(res["choices"][0]["message"]["content"])
-else:
- print("Error:", response.status_code, response.text)
+print(response.json()["choices"][0]["message"]["content"])
```
-## Using with PhiData
-FindData is a favorite of mine for creating AI assistants, thanks to its beautifully simplified interface, unlike the complexity seen in the LangChain library and LlamaIndex. I use it for many projects and want to give kudos to their team. It's open source, and I recommend everyone check it out. You can explore more from this link https://github.com/phidatahq/phidata.
-
-
-```python
-from google.README import userdata
-from phi.llm.openai.like import OpenAILike
-from phi.assistant import Assistant
-from phi.tools.duckduckgo import DuckDuckGo
-import os, json
-
+- Notice that no function schema is passed in the request. GroqCall uses FuncHub to detect and execute the function in the cloud based on its name alone, so you don't need to parse the first response, call the function yourself, and send the result back in a second request. To add your own functions, check the `functions` folder (a rough sketch follows below); more examples on writing custom functions are coming soon.
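+
+As a rough illustration of the idea, a hosted function is a plain Python callable whose toolkit and function name form the `duckduck.search`-style identifier. The layout below is hypothetical (the authoritative format is whatever the modules in the `functions` folder define) and mirrors the built-in DuckDuckGo tools:
+
+```python
+# functions/duckduck.py -- hypothetical layout
+# The proxy resolves "duckduck.search" to this callable, executes it,
+# and feeds the result back to the LLM in a single round trip.
+from duckduckgo_search import DDGS
+
+def search(query, max_results=5):
+    """Search DuckDuckGo for a query."""
+    with DDGS() as ddgs:
+        return [r for r in ddgs.text(query, safesearch="off", max_results=max_results)]
+```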
-my_groq = OpenAILike(
- model="mixtral-8x7b-32768",
- api_key=userdata.get("GROQ_API_KEY"),
- base_url="https://groqcall.ai/proxy/groq/v1"
- )
-assistant = Assistant(
- llm=my_groq,
- tools=[DuckDuckGo()], show_tool_calls=True, markdown=True
-)
-assistant.print_response("Whats happening in France? Summarize top stories with sources, very short and concise.", stream=False)
+#### Passing Function Schemas
+If you prefer to pass your own function schemas, refer to the [Function Schema example](https://github.com/unclecode/groqcall/blob/main/cookbook/function_call_with_schema.py) in the cookbook.
-```
+#### Using the Proxy with Ollama Locally
-## Contributions Welcome! 🙌
-I am excited to extend and grow this repository by adding more built-in functions and integrating additional services. If you are interested in contributing to this project and being a part of its development, I would love to collaborate with you! I plan to create a discord channel for this project, where we can discuss ideas, share knowledge, and work together to enhance the repository.
+The function call proxy can also be used with Ollama. First install Ollama and run it locally, then refer to the [Ollama example](https://github.com/unclecode/groqcall/blob/main/cookbook/function_call_ollama.py) in the cookbook.
-Here's how you can get involved:
+## Cookbook
-1. Fork the repository and create your own branch.
-2. Implement new functions, integrate additional services, or make improvements to the existing codebase.
-3. Test your changes to ensure they work as expected.
-4. Submit a pull request describing the changes you have made and why they are valuable.
+Explore the [Cookbook](https://github.com/unclecode/groqcall/tree/main/cookbook) for more examples and use cases of GroqCall.
-If you have any ideas, suggestions, or would like to discuss potential contributions, feel free to reach out to me. You can contact me through the following channels:
+## Motivation
-- Twitter (X): @unclecode
-- Email: unclecode@kidocode.com
+Groq is a startup that designs highly specialized processor chips aimed specifically at running inference on large language models. They've introduced what they call the Language Processing Unit (LPU), and the speed is astounding—capable of producing 500 to 800 tokens per second or more.
-### Copyright 2024 Unclecode (Hossein Tohidi)
+As an admirer of Groq and their community, I built this proxy to enable function calls using the OpenAI interface, allowing it to be called from any library. This engineering workaround has proven to be immensely useful in my company for various projects.
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
+## Contributing
- http://www.apache.org/licenses/LICENSE-2.0
+Contributions are welcome! If you have ideas, suggestions, or would like to contribute to this project, please reach out to me on Twitter (X) @unclecode or via email at unclecode@kidocode.com.
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
+Let's collaborate and make this repository even more awesome! 🚀
-I'm open to collaboration and excited to see how we can work together to enhance this project and provide value to the community. Let's connect and explore how we can help each other!
+## License
-Together, let's make this repository even more awesome! 🚀
+This project is licensed under the Apache License 2.0. See [LICENSE](https://github.com/unclecode/groqcall/blob/main/LICENSE) for more information.
\ No newline at end of file
diff --git a/README_old.md b/README_old.md
new file mode 100644
index 0000000..f6bcfc7
--- /dev/null
+++ b/README_old.md
@@ -0,0 +1,319 @@
+# GroqCall.ai (I changed the name from FunckyCall to GroqCall)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1q3is7qynCsx4s7FBznCfTMnokbKWIv1F?usp=sharing)
+[![Version](https://img.shields.io/badge/version-0.0.1-blue.svg)](https://github.com/unclecode/groqcall)
+[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
+
+GroqCall is a proxy server provides function call for Groq's lightning-fast Language Processing Unit (LPU) and other AI providers. Additionally, the upcoming FuncyHub will offer a wide range of built-in functions, hosted on the cloud, making it easier to create AI assistants without the need to maintain function schemas in the codebase or or execute them through multiple calls.
+
+## Motivation 🚀
+Groq is a startup that designs highly specialized processor chips aimed specifically at running inference on large language models. They've introduced what they call the Language Processing Unit (LPU), and the speed is astounding—capable of producing 500 to 800 tokens per second or more. I've become a big fan of Groq and their community;
+
+
+I admire what they're doing. It feels like after discovering electricity, the next challenge is moving it around quickly and efficiently. Groq is doing just that for Artificial Intelligence, making it easily accessible everywhere. They've opened up their API to the cloud, but as of now, they lack a function call capability.
+
+Unable to wait for this feature, I built a proxy that enables function calls using the OpenAI interface, allowing it to be called from any library. This engineering workaround has proven to be immensely useful in my company for various projects. Here's the link to the GitHub repository where you can explore and play around with it. I've included some examples in this collaboration for you to check out.
+
+
+
+
+
+
+
+## Running the Proxy Locally 🖥️
+To run this proxy locally on your own machine, follow these steps:
+
+1. Clone the GitHub repository:
+```git clone https://github.com/unclecode/groqcall.git```
+
+2. Navigate to the project directory:
+```cd groqcall```
+
+3. Create a virtual environment:
+```python -m venv venv```
+
+4. Activate virtual environment:
+```source venv/bin/activate```
+
+5. Install the required libraries:
+```pip install -r requirements.txt```
+
+6. Run the FastAPI server:
+```./venv/bin/uvicorn --app-dir app/ main:app --reload```
+
+
+## Using the Pre-built Server 🌐
+For your convenience, I have already set up a server that you can use temporarily. This allows you to quickly start using the proxy without having to run it locally.
+
+To use the pre-built server, simply make requests to the following base URL:
+```https://groqcall.ai/proxy/groq/v1```
+
+
+## Exploring GroqCall.ai 🚀
+This README is organized into three main sections, each showcasing different aspects of GroqCall.ai:
+
+- **Sending POST Requests**: Here, I explore the functionality of sending direct POST requests to LLMs using GroqCall.ai. This section highlights the flexibility and control offered by the library when interacting with LLMs.
+- **FuncHub**: The second section introduces the concept of FuncHub, a useful feature that simplifies the process of executing functions. With FuncHub, there is no need to send the function JSON schema explicitly, as the functions are already hosted on the proxy server. This approach streamlines the workflow, allowing developers to obtain results with a single call without having to handle function call is production server.
+- **Using GroqCall with PhiData**: In this section, I demonstrate how GroqCall.ai can be seamlessly integrated with other libraries such as my favorite one, the PhiData library, leveraging its built-in tools to connect to LLMs and perform external tool requests.
+
+
+```python
+# The following libraries are optional if you're interested in using PhiData or managing your tools on the client side.
+!pip install phidata > /dev/null
+!pip install openai > /dev/null
+!pip install duckduckgo-search > /dev/null
+```
+
+## Sending POST request, with full functions implementation
+
+
+```python
+from duckduckgo_search import DDGS
+import requests, os
+import json
+
+# Here you pass your own GROQ API key
+api_key=userdata.get("GROQ_API_KEY")
+header = {
+ "Authorization": f"Bearer {api_key}",
+ "Content-Type": "application/json"
+}
+proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions"
+
+
+def duckduckgo_search(query, max_results=None):
+ """
+ Use this function to search DuckDuckGo for a query.
+ """
+ with DDGS() as ddgs:
+ return [r for r in ddgs.text(query, safesearch='off', max_results=max_results)]
+
+def duckduckgo_news(query, max_results=None):
+ """
+ Use this function to get the latest news from DuckDuckGo.
+ """
+ with DDGS() as ddgs:
+ return [r for r in ddgs.news(query, safesearch='off', max_results=max_results)]
+
+function_map = {
+ "duckduckgo_search": duckduckgo_search,
+ "duckduckgo_news": duckduckgo_news,
+}
+
+request = {
+ "messages": [
+ {
+ "role": "system",
+ "content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n\n1. Use markdown to format your answers.\n"
+ },
+ {
+ "role": "user",
+ "content": "Whats happening in France? Summarize top stories with sources, very short and concise."
+ }
+ ],
+ "model": "mixtral-8x7b-32768",
+ "tool_choice": "auto",
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "duckduckgo_search",
+ "description": "Use this function to search DuckDuckGo for a query.\n\nArgs:\n query(str): The query to search for.\n max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n The result from DuckDuckGo.",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string"
+ },
+ "max_results": {
+ "type": [
+ "number",
+ "null"
+ ]
+ }
+ }
+ }
+ }
+ },
+ {
+ "type": "function",
+ "function": {
+ "name": "duckduckgo_news",
+ "description": "Use this function to get the latest news from DuckDuckGo.\n\nArgs:\n query(str): The query to search for.\n max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n The latest news from DuckDuckGo.",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string"
+ },
+ "max_results": {
+ "type": [
+ "number",
+ "null"
+ ]
+ }
+ }
+ }
+ }
+ }
+ ]
+}
+
+response = requests.post(
+ proxy_url,
+ headers=header,
+ json=request
+)
+if response.status_code == 200:
+ res = response.json()
+ message = res['choices'][0]['message']
+ tools_response_messages = []
+ if not message['content'] and 'tool_calls' in message:
+ for tool_call in message['tool_calls']:
+ tool_name = tool_call['function']['name']
+ tool_args = tool_call['function']['arguments']
+ tool_args = json.loads(tool_args)
+ if tool_name not in function_map:
+ print(f"Error: {tool_name} is not a valid function name.")
+ continue
+ tool_func = function_map[tool_name]
+ tool_response = tool_func(**tool_args)
+ tools_response_messages.append({
+ "role": "tool", "content": json.dumps(tool_response)
+ })
+
+ if tools_response_messages:
+ request['messages'] += tools_response_messages
+ response = requests.post(
+ proxy_url,
+ headers=header,
+ json=request
+ )
+ if response.status_code == 200:
+ res = response.json()
+ print(res['choices'][0]['message']['content'])
+ else:
+ print("Error:", response.status_code, response.text)
+ else:
+ print(message['content'])
+else:
+ print("Error:", response.status_code, response.text)
+
+```
+
+## Schema-less Function Call 🤩
+In this method, we only need to provide the function's name, which consists of two parts, acting as a sort of namespace. The first part identifies the library or toolkit containing the functions, and the second part specifies the function's name, assuming it's already available on the proxy server. I aim to collaborate with the community to incorporate all typical functions, eliminating the need for passing a schema. Without having to handle function calls ourselves, a single request to the proxy enables it to identify and execute the functions, retrieve responses from large language models, and return the results to us. Thanks to Groq, all of this occurs in just seconds.
+
+
+```python
+from duckduckgo_search import DDGS
+import requests, os
+api_key = userdata.get("GROQ_API_KEY")
+header = {
+ "Authorization": f"Bearer {api_key}",
+ "Content-Type": "application/json"
+}
+
+proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions"
+
+
+request = {
+ "messages": [
+ {
+ "role": "system",
+ "content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n\n1. Use markdown to format your answers.\n",
+ },
+ {
+ "role": "user",
+ "content": "Whats happening in France? Summarize top stories with sources, very short and concise. Also please search about the histoy of france as well.",
+ },
+ ],
+ "model": "mixtral-8x7b-32768",
+ "tool_choice": "auto",
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "duckduck.search",
+ },
+ },
+ {
+ "type": "function",
+ "function": {
+ "name": "duckduck.news",
+ },
+ },
+ ],
+}
+
+response = requests.post(
+ proxy_url,
+ headers=header,
+ json=request,
+)
+
+if response.status_code == 200:
+ res = response.json()
+ print(res["choices"][0]["message"]["content"])
+else:
+ print("Error:", response.status_code, response.text)
+
+```
+
+## Using with PhiData
+FindData is a favorite of mine for creating AI assistants, thanks to its beautifully simplified interface, unlike the complexity seen in the LangChain library and LlamaIndex. I use it for many projects and want to give kudos to their team. It's open source, and I recommend everyone check it out. You can explore more from this link https://github.com/phidatahq/phidata.
+
+
+```python
+from google.README import userdata
+from phi.llm.openai.like import OpenAILike
+from phi.assistant import Assistant
+from phi.tools.duckduckgo import DuckDuckGo
+import os, json
+
+
+my_groq = OpenAILike(
+ model="mixtral-8x7b-32768",
+ api_key=userdata.get("GROQ_API_KEY"),
+ base_url="https://groqcall.ai/proxy/groq/v1"
+ )
+assistant = Assistant(
+ llm=my_groq,
+ tools=[DuckDuckGo()], show_tool_calls=True, markdown=True
+)
+assistant.print_response("Whats happening in France? Summarize top stories with sources, very short and concise.", stream=False)
+
+
+```
+
+## Contributions Welcome! 🙌
+I am excited to extend and grow this repository by adding more built-in functions and integrating additional services. If you are interested in contributing to this project and being a part of its development, I would love to collaborate with you! I plan to create a discord channel for this project, where we can discuss ideas, share knowledge, and work together to enhance the repository.
+
+Here's how you can get involved:
+
+1. Fork the repository and create your own branch.
+2. Implement new functions, integrate additional services, or make improvements to the existing codebase.
+3. Test your changes to ensure they work as expected.
+4. Submit a pull request describing the changes you have made and why they are valuable.
+
+If you have any ideas, suggestions, or would like to discuss potential contributions, feel free to reach out to me. You can contact me through the following channels:
+
+- Twitter (X): @unclecode
+- Email: unclecode@kidocode.com
+
+### Copyright 2024 Unclecode (Hossein Tohidi)
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+I'm open to collaboration and excited to see how we can work together to enhance this project and provide value to the community. Let's connect and explore how we can help each other!
+
+Together, let's make this repository even more awesome! 🚀
diff --git a/app/main.py b/app/main.py
index c5a4bca..2e67dd7 100644
--- a/app/main.py
+++ b/app/main.py
@@ -2,12 +2,14 @@
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
from fastapi.staticfiles import StaticFiles
+from starlette.middleware.cors import CORSMiddleware
from starlette.requests import Request
from routes import proxy
from routes import examples
from utils import create_logger
import os
from dotenv import load_dotenv
+
load_dotenv()
app = FastAPI()
@@ -16,31 +18,53 @@
app.mount("/static", StaticFiles(directory="frontend/assets"), name="static")
templates = Jinja2Templates(directory="frontend/pages")
+
+origins = [
+ "*",
+]
+
+app.add_middleware(
+ CORSMiddleware,
+ allow_origins=origins,
+ allow_credentials=True,
+ allow_methods=["*"],
+ allow_headers=["*"],
+)
+
+
@app.middleware("http")
async def log_requests(request: Request, call_next):
if "/proxy" in request.url.path:
client_ip = request.client.host
- logger.info(f"Incoming request from {client_ip}: {request.method} {request.url}")
+ logger.info(
+ f"Incoming request from {client_ip}: {request.method} {request.url}"
+ )
response = await call_next(request)
# logger.info(f"Response status code: {response.status_code}")
return response
else:
return await call_next(request)
+
app.include_router(proxy.router, prefix="/proxy")
app.include_router(examples.router, prefix="/examples")
+
@app.get("/", response_class=HTMLResponse)
async def index(request: Request):
return templates.TemplateResponse("index.html", {"request": request})
+
# Add a GET endpoint that simply returns the version of the app
@app.get("/version")
async def version():
return {"version": "0.0.1"}
+
if __name__ == "__main__":
import uvicorn
- # uvicorn.run("main:app", host=os.getenv("HOST"), port=int(os.getenv('PORT')), workers=1, reload=True)
- uvicorn.run("main:app", host=os.getenv("HOST"), port=int(os.getenv('PORT')), workers=1)
+ # uvicorn.run("main:app", host=os.getenv("HOST"), port=int(os.getenv('PORT')), workers=1, reload=True)
+ uvicorn.run(
+ "main:app", host=os.getenv("HOST"), port=int(os.getenv("PORT")), workers=1
+ )
diff --git a/cookbook/function_call_ollama.py b/cookbook/function_call_ollama.py
new file mode 100644
index 0000000..3872906
--- /dev/null
+++ b/cookbook/function_call_ollama.py
@@ -0,0 +1,16 @@
+
+from phi.llm.openai.like import OpenAILike
+from phi.assistant import Assistant
+from phi.tools.duckduckgo import DuckDuckGo
+
+# The proxy works great with Ollama too, meaning it can be used with any provider. But you never get the speed of Groq ;)
+my_ollama = OpenAILike(
+ model="gemma:7b",
+ api_key="",
+ base_url="http://localhost:11235/proxy/ollama/v1"
+ )
+ollama_assistant = Assistant(
+ llm=my_ollama,
+ tools=[DuckDuckGo()], show_tool_calls=True, markdown=True
+)
+ollama_assistant.print_response("What's happening in France? Summarize top stories with sources, very short and concise.", stream=False)
\ No newline at end of file
diff --git a/cookbook/function_call_phidata.py b/cookbook/function_call_phidata.py
new file mode 100644
index 0000000..5e9a5f6
--- /dev/null
+++ b/cookbook/function_call_phidata.py
@@ -0,0 +1,34 @@
+
+from phi.llm.openai.like import OpenAILike
+from phi.assistant import Assistant
+from phi.tools.duckduckgo import DuckDuckGo
+import os, json
+
+
+groq = OpenAILike(
+ model="mixtral-8x7b-32768",
+ api_key=os.environ["GROQ_API_KEY"],
+ base_url="https://api.groq.com/openai/v1"
+ )
+assistant = Assistant(
+ llm=groq,
+ tools=[DuckDuckGo()], show_tool_calls=True, markdown=True
+)
+
+# If you run this without the proxy, you will get an error, because Groq's API does not support function calling
+# assistant.print_response("What's happening in France? Summarize top stories with sources, very short and concise.", stream=False)
+
+my_groq = OpenAILike(
+ model="mixtral-8x7b-32768", # or model="gemma-7b-it",
+ api_key=os.environ["GROQ_API_KEY"],
+ base_url="https://groqcall.ai/proxy/groq/v1" # or "http://localhost:8000/proxy/groq/v1" if running locally
+ )
+assistant = Assistant(
+ llm=my_groq,
+ tools=[DuckDuckGo()], show_tool_calls=True, markdown=True
+)
+assistant.print_response("What's happening in France? Summarize top stories with sources, very short and concise.", stream=False)
+
+
+
+
diff --git a/cookbook/function_call_with_schema.py b/cookbook/function_call_with_schema.py
new file mode 100644
index 0000000..8b92118
--- /dev/null
+++ b/cookbook/function_call_with_schema.py
@@ -0,0 +1,131 @@
+
+from duckduckgo_search import DDGS
+import requests, os, json
+
+api_key = os.environ["GROQ_API_KEY"]
+header = {
+ "Authorization": f"Bearer {api_key}",
+ "Content-Type": "application/json"
+}
+proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions" # or "http://localhost:8000/proxy/groq/v1/chat/completions" if running locally
+
+
+def duckduckgo_search(query, max_results=None):
+ """
+ Use this function to search DuckDuckGo for a query.
+ """
+ with DDGS() as ddgs:
+ return [r for r in ddgs.text(query, safesearch='off', max_results=max_results)]
+
+def duckduckgo_news(query, max_results=None):
+ """
+ Use this function to get the latest news from DuckDuckGo.
+ """
+ with DDGS() as ddgs:
+ return [r for r in ddgs.news(query, safesearch='off', max_results=max_results)]
+
+function_map = {
+ "duckduckgo_search": duckduckgo_search,
+ "duckduckgo_news": duckduckgo_news,
+}
+
+request = {
+ "messages": [
+ {
+ "role": "system",
+ "content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n\n1. Use markdown to format your answers.\n"
+ },
+ {
+ "role": "user",
+            "content": "What's happening in France? Summarize top stories with sources, very short and concise."
+ }
+ ],
+ "model": "mixtral-8x7b-32768",
+ "tool_choice": "auto",
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "duckduckgo_search",
+ "description": "Use this function to search DuckDuckGo for a query.\n\nArgs:\n query(str): The query to search for.\n max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n The result from DuckDuckGo.",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string"
+ },
+ "max_results": {
+ "type": [
+ "number",
+ "null"
+ ]
+ }
+ }
+ }
+ }
+ },
+ {
+ "type": "function",
+ "function": {
+ "name": "duckduckgo_news",
+ "description": "Use this function to get the latest news from DuckDuckGo.\n\nArgs:\n query(str): The query to search for.\n max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n The latest news from DuckDuckGo.",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string"
+ },
+ "max_results": {
+ "type": [
+ "number",
+ "null"
+ ]
+ }
+ }
+ }
+ }
+ }
+ ]
+}
+
+response = requests.post(
+ proxy_url,
+ headers=header,
+ json=request
+)
+# Check if the request was successful
+if response.status_code == 200:
+ # Process the response data (if needed)
+ res = response.json()
+ message = res['choices'][0]['message']
+ tools_response_messages = []
+ if not message['content'] and 'tool_calls' in message:
+ for tool_call in message['tool_calls']:
+ tool_name = tool_call['function']['name']
+ tool_args = tool_call['function']['arguments']
+ tool_args = json.loads(tool_args)
+ if tool_name not in function_map:
+ print(f"Error: {tool_name} is not a valid function name.")
+ continue
+ tool_func = function_map[tool_name]
+ tool_response = tool_func(**tool_args)
+ tools_response_messages.append({
+ "role": "tool", "content": json.dumps(tool_response)
+ })
+
+ if tools_response_messages:
+ request['messages'] += tools_response_messages
+ response = requests.post(
+ proxy_url,
+ headers=header,
+ json=request
+ )
+ if response.status_code == 200:
+ res = response.json()
+ print(res['choices'][0]['message']['content'])
+ else:
+ print("Error:", response.status_code, response.text)
+ else:
+ print(message['content'])
+else:
+ print("Error:", response.status_code, response.text)
diff --git a/cookbook/function_call_without_schema.py b/cookbook/function_call_without_schema.py
new file mode 100644
index 0000000..39396a8
--- /dev/null
+++ b/cookbook/function_call_without_schema.py
@@ -0,0 +1,46 @@
+import requests
+
+api_key = "YOUR_GROQ_API_KEY"
+header = {
+ "Authorization": f"Bearer {api_key}",
+ "Content-Type": "application/json"
+}
+
+proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions" # or "http://localhost:8000/proxy/groq/v1/chat/completions" if running locally
+
+request = {
+ "messages": [
+ {
+ "role": "system",
+ "content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n\n1. Use markdown to format your answers.\n"
+ },
+ {
+ "role": "user",
+ "content": "What's happening in France? Summarize top stories with sources, very short and concise."
+ }
+ ],
+ "model": "mixtral-8x7b-32768",
+ "tool_choice": "auto",
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "duckduck.search"
+ }
+ },
+ {
+ "type": "function",
+ "function": {
+ "name": "duckduck.news"
+ }
+ }
+ ]
+}
+
+response = requests.post(
+ proxy_url,
+ headers=header,
+ json=request
+)
+
+print(response.json()["choices"][0]["message"]["content"])
\ No newline at end of file
diff --git a/frontend/assets/README.md b/frontend/assets/README.md
new file mode 100644
index 0000000..c05d402
--- /dev/null
+++ b/frontend/assets/README.md
@@ -0,0 +1,150 @@
+# GroqCall.ai - Lightning-Fast LLM Function Calls
+
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1q3is7qynCsx4s7FBznCfTMnokbKWIv1F?usp=sharing)
+[![Version](https://img.shields.io/badge/version-0.0.1-blue.svg)](https://github.com/unclecode/groqcall)
+[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
+
+GroqCall is a proxy server that enables lightning-fast function calls for Groq's Language Processing Unit (LPU) and other AI providers. It simplifies the creation of AI assistants by offering a wide range of built-in functions hosted on the cloud.
+
+## Quickstart
+
+### Using the Pre-built Server
+
+To get started quickly, make requests to one of the following base URLs:
+
+- Cloud: `https://groqcall.ai/proxy/groq/v1`
+- Local: `http://localhost:8000/proxy/groq/v1` (if running the proxy server locally)
+
+### Running the Proxy Locally
+
+1. Clone the repository:
+```
+git clone https://github.com/unclecode/groqcall.git
+cd groqcall
+```
+
+2. Create and activate a virtual environment:
+```
+python -m venv venv
+source venv/bin/activate
+```
+
+3. Install dependencies:
+```
+pip install -r requirements.txt
+```
+
+4. Run the FastAPI server:
+```
+./venv/bin/uvicorn --app-dir app/ main:app --reload
+```
+
+## Examples
+
+### Using GroqCall with PhiData
+
+```python
+from phi.llm.openai.like import OpenAILike
+from phi.assistant import Assistant
+from phi.tools.duckduckgo import DuckDuckGo
+
+my_groq = OpenAILike(
+ model="mixtral-8x7b-32768",
+ api_key="YOUR_GROQ_API_KEY",
+ base_url="https://groqcall.ai/proxy/groq/v1" # or "http://localhost:8000/proxy/groq/v1" if running locally
+)
+
+assistant = Assistant(
+ llm=my_groq,
+ tools=[DuckDuckGo()],
+ show_tool_calls=True,
+ markdown=True
+)
+
+assistant.print_response("What's happening in France? Summarize top stories with sources, very short and concise.", stream=False)
+```
+
+### Using GroqCall with Requests
+
+#### FuncHub: Schema-less Function Calls
+
+GroqCall introduces FuncHub, which allows you to make function calls without passing the function schema.
+
+```python
+import requests
+
+api_key = "YOUR_GROQ_API_KEY"
+header = {
+ "Authorization": f"Bearer {api_key}",
+ "Content-Type": "application/json"
+}
+
+proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions" # or "http://localhost:8000/proxy/groq/v1/chat/completions" if running locally
+
+request = {
+ "messages": [
+ {
+ "role": "system",
+ "content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n\n1. Use markdown to format your answers.\n"
+ },
+ {
+ "role": "user",
+ "content": "What's happening in France? Summarize top stories with sources, very short and concise."
+ }
+ ],
+ "model": "mixtral-8x7b-32768",
+ "tool_choice": "auto",
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "duckduck.search"
+ }
+ },
+ {
+ "type": "function",
+ "function": {
+ "name": "duckduck.news"
+ }
+ }
+ ]
+}
+
+response = requests.post(
+ proxy_url,
+ headers=header,
+ json=request
+)
+
+print(response.json()["choices"][0]["message"]["content"])
+```
+
+- Notice that no function schema is passed in the request. GroqCall uses FuncHub to detect and execute the function in the cloud based on its name alone, so you don't need to parse the first response, call the function yourself, and send the result back in a second request. To add your own functions, check the `functions` folder; more examples on writing custom functions are coming soon.
+
+#### Passing Function Schemas
+
+If you prefer to pass your own function schemas, refer to the [Function Schema example](https://github.com/unclecode/groqcall/blob/main/cookbook/function_call_with_schema.py) in the cookbook.
+
+#### Using the Proxy with Ollama Locally
+
+The function call proxy can also be used with Ollama. First install Ollama and run it locally, then refer to the [Ollama example](https://github.com/unclecode/groqcall/blob/main/cookbook/function_call_ollama.py) in the cookbook.
+
+## Cookbook
+
+Explore the [Cookbook](https://github.com/unclecode/groqcall/tree/main/cookbook) for more examples and use cases of GroqCall.
+
+## Motivation
+
+Groq is a startup that designs highly specialized processor chips aimed specifically at running inference on large language models. They've introduced what they call the Language Processing Unit (LPU), and the speed is astounding—capable of producing 500 to 800 tokens per second or more.
+
+As an admirer of Groq and their community, I built this proxy to enable function calls using the OpenAI interface, allowing it to be called from any library. This engineering workaround has proven to be immensely useful in my company for various projects.
+
+## Contributing
+
+Contributions are welcome! If you have ideas, suggestions, or would like to contribute to this project, please reach out to me on Twitter (X) @unclecode or via email at unclecode@kidocode.com.
+
+Let's collaborate and make this repository even more awesome! 🚀
+
+## License
+
+This project is licensed under the Apache License 2.0. See [LICENSE](https://github.com/unclecode/groqcall/blob/main/LICENSE) for more information.
\ No newline at end of file
diff --git a/frontend/assets/markdown.css b/frontend/assets/markdown.css
new file mode 100644
index 0000000..a24996e
--- /dev/null
+++ b/frontend/assets/markdown.css
@@ -0,0 +1,1196 @@
+@media (prefers-color-scheme: dark) {
+ .markdown-body,
+ [data-theme="dark"] {
+ /*dark*/
+ color-scheme: dark;
+ --color-prettylights-syntax-comment: #8b949e;
+ --color-prettylights-syntax-constant: #79c0ff;
+ --color-prettylights-syntax-entity: #d2a8ff;
+ --color-prettylights-syntax-storage-modifier-import: #c9d1d9;
+ --color-prettylights-syntax-entity-tag: #7ee787;
+ --color-prettylights-syntax-keyword: #ff7b72;
+ --color-prettylights-syntax-string: #a5d6ff;
+ --color-prettylights-syntax-variable: #ffa657;
+ --color-prettylights-syntax-brackethighlighter-unmatched: #f85149;
+ --color-prettylights-syntax-invalid-illegal-text: #f0f6fc;
+ --color-prettylights-syntax-invalid-illegal-bg: #8e1519;
+ --color-prettylights-syntax-carriage-return-text: #f0f6fc;
+ --color-prettylights-syntax-carriage-return-bg: #b62324;
+ --color-prettylights-syntax-string-regexp: #7ee787;
+ --color-prettylights-syntax-markup-list: #f2cc60;
+ --color-prettylights-syntax-markup-heading: #1f6feb;
+ --color-prettylights-syntax-markup-italic: #c9d1d9;
+ --color-prettylights-syntax-markup-bold: #c9d1d9;
+ --color-prettylights-syntax-markup-deleted-text: #ffdcd7;
+ --color-prettylights-syntax-markup-deleted-bg: #67060c;
+ --color-prettylights-syntax-markup-inserted-text: #aff5b4;
+ --color-prettylights-syntax-markup-inserted-bg: #033a16;
+ --color-prettylights-syntax-markup-changed-text: #ffdfb6;
+ --color-prettylights-syntax-markup-changed-bg: #5a1e02;
+ --color-prettylights-syntax-markup-ignored-text: #c9d1d9;
+ --color-prettylights-syntax-markup-ignored-bg: #1158c7;
+ --color-prettylights-syntax-meta-diff-range: #d2a8ff;
+ --color-prettylights-syntax-brackethighlighter-angle: #8b949e;
+ --color-prettylights-syntax-sublimelinter-gutter-mark: #484f58;
+ --color-prettylights-syntax-constant-other-reference-link: #a5d6ff;
+ --color-fg-default: #e6edf3;
+ --color-fg-muted: #848d97;
+ --color-fg-subtle: #6e7681;
+ --color-canvas-default: #0d1117;
+ --color-canvas-subtle: #161b22;
+ --color-border-default: #30363d;
+ --color-border-muted: #21262d;
+ --color-neutral-muted: rgba(110,118,129,0.4);
+ --color-accent-fg: #2f81f7;
+ --color-accent-emphasis: #1f6feb;
+ --color-success-fg: #3fb950;
+ --color-success-emphasis: #238636;
+ --color-attention-fg: #d29922;
+ --color-attention-emphasis: #9e6a03;
+ --color-attention-subtle: rgba(187,128,9,0.15);
+ --color-danger-fg: #f85149;
+ --color-danger-emphasis: #da3633;
+ --color-done-fg: #a371f7;
+ --color-done-emphasis: #8957e5;
+ }
+ }
+
+ @media (prefers-color-scheme: light) {
+ .markdown-body,
+ [data-theme="light"] {
+ /*light*/
+ color-scheme: light;
+ --color-prettylights-syntax-comment: #57606a;
+ --color-prettylights-syntax-constant: #0550ae;
+ --color-prettylights-syntax-entity: #6639ba;
+ --color-prettylights-syntax-storage-modifier-import: #24292f;
+ --color-prettylights-syntax-entity-tag: #116329;
+ --color-prettylights-syntax-keyword: #cf222e;
+ --color-prettylights-syntax-string: #0a3069;
+ --color-prettylights-syntax-variable: #953800;
+ --color-prettylights-syntax-brackethighlighter-unmatched: #82071e;
+ --color-prettylights-syntax-invalid-illegal-text: #f6f8fa;
+ --color-prettylights-syntax-invalid-illegal-bg: #82071e;
+ --color-prettylights-syntax-carriage-return-text: #f6f8fa;
+ --color-prettylights-syntax-carriage-return-bg: #cf222e;
+ --color-prettylights-syntax-string-regexp: #116329;
+ --color-prettylights-syntax-markup-list: #3b2300;
+ --color-prettylights-syntax-markup-heading: #0550ae;
+ --color-prettylights-syntax-markup-italic: #24292f;
+ --color-prettylights-syntax-markup-bold: #24292f;
+ --color-prettylights-syntax-markup-deleted-text: #82071e;
+ --color-prettylights-syntax-markup-deleted-bg: #ffebe9;
+ --color-prettylights-syntax-markup-inserted-text: #116329;
+ --color-prettylights-syntax-markup-inserted-bg: #dafbe1;
+ --color-prettylights-syntax-markup-changed-text: #953800;
+ --color-prettylights-syntax-markup-changed-bg: #ffd8b5;
+ --color-prettylights-syntax-markup-ignored-text: #eaeef2;
+ --color-prettylights-syntax-markup-ignored-bg: #0550ae;
+ --color-prettylights-syntax-meta-diff-range: #8250df;
+ --color-prettylights-syntax-brackethighlighter-angle: #57606a;
+ --color-prettylights-syntax-sublimelinter-gutter-mark: #8c959f;
+ --color-prettylights-syntax-constant-other-reference-link: #0a3069;
+ --color-fg-default: #1F2328;
+ --color-fg-muted: #656d76;
+ --color-fg-subtle: #6e7781;
+ --color-canvas-default: #ffffff;
+ --color-canvas-subtle: #f6f8fa;
+ --color-border-default: #d0d7de;
+ --color-border-muted: hsla(210,18%,87%,1);
+ --color-neutral-muted: rgba(175,184,193,0.2);
+ --color-accent-fg: #0969da;
+ --color-accent-emphasis: #0969da;
+ --color-success-fg: #1a7f37;
+ --color-success-emphasis: #1f883d;
+ --color-attention-fg: #9a6700;
+ --color-attention-emphasis: #9a6700;
+ --color-attention-subtle: #fff8c5;
+ --color-danger-fg: #d1242f;
+ --color-danger-emphasis: #cf222e;
+ --color-done-fg: #8250df;
+ --color-done-emphasis: #8250df;
+ }
+ }
+
+ .markdown-body {
+ -ms-text-size-adjust: 100%;
+ -webkit-text-size-adjust: 100%;
+ margin: 0;
+ color: var(--color-fg-default);
+ background-color: var(--color-canvas-default);
+ font-family: -apple-system,BlinkMacSystemFont,"Segoe UI","Noto Sans",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji";
+ font-size: 16px;
+ line-height: 1.5;
+ word-wrap: break-word;
+ }
+
+ .markdown-body .octicon {
+ display: inline-block;
+ fill: currentColor;
+ vertical-align: text-bottom;
+ }
+
+ .markdown-body h1:hover .anchor .octicon-link:before,
+ .markdown-body h2:hover .anchor .octicon-link:before,
+ .markdown-body h3:hover .anchor .octicon-link:before,
+ .markdown-body h4:hover .anchor .octicon-link:before,
+ .markdown-body h5:hover .anchor .octicon-link:before,
+ .markdown-body h6:hover .anchor .octicon-link:before {
+ width: 16px;
+ height: 16px;
+ content: ' ';
+ display: inline-block;
+ background-color: currentColor;
+ -webkit-mask-image: url("data:image/svg+xml,");
+ mask-image: url("data:image/svg+xml,");
+ }
+
+ .markdown-body details,
+ .markdown-body figcaption,
+ .markdown-body figure {
+ display: block;
+ }
+
+ .markdown-body summary {
+ display: list-item;
+ }
+
+ .markdown-body [hidden] {
+ display: none !important;
+ }
+
+ .markdown-body a {
+ background-color: transparent;
+ color: var(--color-accent-fg);
+ text-decoration: none;
+ }
+
+ .markdown-body abbr[title] {
+ border-bottom: none;
+ -webkit-text-decoration: underline dotted;
+ text-decoration: underline dotted;
+ }
+
+ .markdown-body b,
+ .markdown-body strong {
+ font-weight: var(--base-text-weight-semibold, 600);
+ }
+
+ .markdown-body dfn {
+ font-style: italic;
+ }
+
+ .markdown-body h1 {
+ margin: .67em 0;
+ font-weight: var(--base-text-weight-semibold, 600);
+ padding-bottom: .3em;
+ font-size: 2em;
+ border-bottom: 1px solid var(--color-border-muted);
+ }
+
+ .markdown-body mark {
+ background-color: var(--color-attention-subtle);
+ color: var(--color-fg-default);
+ }
+
+ .markdown-body small {
+ font-size: 90%;
+ }
+
+ .markdown-body sub,
+ .markdown-body sup {
+ font-size: 75%;
+ line-height: 0;
+ position: relative;
+ vertical-align: baseline;
+ }
+
+ .markdown-body sub {
+ bottom: -0.25em;
+ }
+
+ .markdown-body sup {
+ top: -0.5em;
+ }
+
+ .markdown-body img {
+ border-style: none;
+ max-width: 100%;
+ box-sizing: content-box;
+ background-color: var(--color-canvas-default);
+ }
+
+ .markdown-body code,
+ .markdown-body kbd,
+ .markdown-body pre,
+ .markdown-body samp {
+ font-family: monospace;
+ font-size: 1em;
+ }
+
+ .markdown-body figure {
+ margin: 1em 40px;
+ }
+
+ .markdown-body hr {
+ box-sizing: content-box;
+ overflow: hidden;
+ background: transparent;
+ border-bottom: 1px solid var(--color-border-muted);
+ height: .25em;
+ padding: 0;
+ margin: 24px 0;
+ background-color: var(--color-border-default);
+ border: 0;
+ }
+
+ .markdown-body input {
+ font: inherit;
+ margin: 0;
+ overflow: visible;
+ font-family: inherit;
+ font-size: inherit;
+ line-height: inherit;
+ }
+
+ .markdown-body [type=button],
+ .markdown-body [type=reset],
+ .markdown-body [type=submit] {
+ -webkit-appearance: button;
+ appearance: button;
+ }
+
+ .markdown-body [type=checkbox],
+ .markdown-body [type=radio] {
+ box-sizing: border-box;
+ padding: 0;
+ }
+
+ .markdown-body [type=number]::-webkit-inner-spin-button,
+ .markdown-body [type=number]::-webkit-outer-spin-button {
+ height: auto;
+ }
+
+ .markdown-body [type=search]::-webkit-search-cancel-button,
+ .markdown-body [type=search]::-webkit-search-decoration {
+ -webkit-appearance: none;
+ appearance: none;
+ }
+
+ .markdown-body ::-webkit-input-placeholder {
+ color: inherit;
+ opacity: .54;
+ }
+
+ .markdown-body ::-webkit-file-upload-button {
+ -webkit-appearance: button;
+ appearance: button;
+ font: inherit;
+ }
+
+ .markdown-body a:hover {
+ text-decoration: underline;
+ }
+
+ .markdown-body ::placeholder {
+ color: var(--color-fg-subtle);
+ opacity: 1;
+ }
+
+ .markdown-body hr::before {
+ display: table;
+ content: "";
+ }
+
+ .markdown-body hr::after {
+ display: table;
+ clear: both;
+ content: "";
+ }
+
+ .markdown-body table {
+ border-spacing: 0;
+ border-collapse: collapse;
+ display: block;
+ width: max-content;
+ max-width: 100%;
+ overflow: auto;
+ }
+
+ .markdown-body td,
+ .markdown-body th {
+ padding: 0;
+ }
+
+ .markdown-body details summary {
+ cursor: pointer;
+ }
+
+ .markdown-body details:not([open])>*:not(summary) {
+ display: none !important;
+ }
+
+ .markdown-body a:focus,
+ .markdown-body [role=button]:focus,
+ .markdown-body input[type=radio]:focus,
+ .markdown-body input[type=checkbox]:focus {
+ outline: 2px solid var(--color-accent-fg);
+ outline-offset: -2px;
+ box-shadow: none;
+ }
+
+ .markdown-body a:focus:not(:focus-visible),
+ .markdown-body [role=button]:focus:not(:focus-visible),
+ .markdown-body input[type=radio]:focus:not(:focus-visible),
+ .markdown-body input[type=checkbox]:focus:not(:focus-visible) {
+ outline: solid 1px transparent;
+ }
+
+ .markdown-body a:focus-visible,
+ .markdown-body [role=button]:focus-visible,
+ .markdown-body input[type=radio]:focus-visible,
+ .markdown-body input[type=checkbox]:focus-visible {
+ outline: 2px solid var(--color-accent-fg);
+ outline-offset: -2px;
+ box-shadow: none;
+ }
+
+ .markdown-body a:not([class]):focus,
+ .markdown-body a:not([class]):focus-visible,
+ .markdown-body input[type=radio]:focus,
+ .markdown-body input[type=radio]:focus-visible,
+ .markdown-body input[type=checkbox]:focus,
+ .markdown-body input[type=checkbox]:focus-visible {
+ outline-offset: 0;
+ }
+
+ .markdown-body kbd {
+ display: inline-block;
+ padding: 3px 5px;
+ font: 11px ui-monospace,SFMono-Regular,SF Mono,Menlo,Consolas,Liberation Mono,monospace;
+ line-height: 10px;
+ color: var(--color-fg-default);
+ vertical-align: middle;
+ background-color: var(--color-canvas-subtle);
+ border: solid 1px var(--color-neutral-muted);
+ border-bottom-color: var(--color-neutral-muted);
+ border-radius: 6px;
+ box-shadow: inset 0 -1px 0 var(--color-neutral-muted);
+ }
+
+ .markdown-body h1,
+ .markdown-body h2,
+ .markdown-body h3,
+ .markdown-body h4,
+ .markdown-body h5,
+ .markdown-body h6 {
+ margin-top: 24px;
+ margin-bottom: 16px;
+ font-weight: var(--base-text-weight-semibold, 600);
+ line-height: 1.25;
+ }
+
+ .markdown-body h2 {
+ font-weight: var(--base-text-weight-semibold, 600);
+ padding-bottom: .3em;
+ font-size: 1.5em;
+ border-bottom: 1px solid var(--color-border-muted);
+ }
+
+ .markdown-body h3 {
+ font-weight: var(--base-text-weight-semibold, 600);
+ font-size: 1.25em;
+ }
+
+ .markdown-body h4 {
+ font-weight: var(--base-text-weight-semibold, 600);
+ font-size: 1em;
+ }
+
+ .markdown-body h5 {
+ font-weight: var(--base-text-weight-semibold, 600);
+ font-size: .875em;
+ }
+
+ .markdown-body h6 {
+ font-weight: var(--base-text-weight-semibold, 600);
+ font-size: .85em;
+ color: var(--color-fg-muted);
+ }
+
+ .markdown-body p {
+ margin-top: 0;
+ margin-bottom: 10px;
+ }
+
+ .markdown-body blockquote {
+ margin: 0;
+ padding: 0 1em;
+ color: var(--color-fg-muted);
+ border-left: .25em solid var(--color-border-default);
+ }
+
+ .markdown-body ul,
+ .markdown-body ol {
+ margin-top: 0;
+ margin-bottom: 0;
+ padding-left: 2em;
+ }
+
+ .markdown-body ol ol,
+ .markdown-body ul ol {
+ list-style-type: lower-roman;
+ }
+
+ .markdown-body ul ul ol,
+ .markdown-body ul ol ol,
+ .markdown-body ol ul ol,
+ .markdown-body ol ol ol {
+ list-style-type: lower-alpha;
+ }
+
+ .markdown-body dd {
+ margin-left: 0;
+ }
+
+ .markdown-body tt,
+ .markdown-body code,
+ .markdown-body samp {
+ font-family: ui-monospace,SFMono-Regular,SF Mono,Menlo,Consolas,Liberation Mono,monospace;
+ font-size: 12px;
+ }
+
+ .markdown-body pre {
+ margin-top: 0;
+ margin-bottom: 0;
+ font-family: ui-monospace,SFMono-Regular,SF Mono,Menlo,Consolas,Liberation Mono,monospace;
+ font-size: 12px;
+ word-wrap: normal;
+ }
+
+ .markdown-body .octicon {
+ display: inline-block;
+ overflow: visible !important;
+ vertical-align: text-bottom;
+ fill: currentColor;
+ }
+
+ .markdown-body input::-webkit-outer-spin-button,
+ .markdown-body input::-webkit-inner-spin-button {
+ margin: 0;
+ -webkit-appearance: none;
+ appearance: none;
+ }
+
+ .markdown-body .mr-2 {
+ margin-right: var(--base-size-8, 8px) !important;
+ }
+
+ .markdown-body::before {
+ display: table;
+ content: "";
+ }
+
+ .markdown-body::after {
+ display: table;
+ clear: both;
+ content: "";
+ }
+
+ .markdown-body>*:first-child {
+ margin-top: 0 !important;
+ }
+
+ .markdown-body>*:last-child {
+ margin-bottom: 0 !important;
+ }
+
+ .markdown-body a:not([href]) {
+ color: inherit;
+ text-decoration: none;
+ }
+
+ .markdown-body .absent {
+ color: var(--color-danger-fg);
+ }
+
+ .markdown-body .anchor {
+ float: left;
+ padding-right: 4px;
+ margin-left: -20px;
+ line-height: 1;
+ }
+
+ .markdown-body .anchor:focus {
+ outline: none;
+ }
+
+ .markdown-body p,
+ .markdown-body blockquote,
+ .markdown-body ul,
+ .markdown-body ol,
+ .markdown-body dl,
+ .markdown-body table,
+ .markdown-body pre,
+ .markdown-body details {
+ margin-top: 0;
+ margin-bottom: 16px;
+ }
+
+ .markdown-body blockquote>:first-child {
+ margin-top: 0;
+ }
+
+ .markdown-body blockquote>:last-child {
+ margin-bottom: 0;
+ }
+
+ .markdown-body h1 .octicon-link,
+ .markdown-body h2 .octicon-link,
+ .markdown-body h3 .octicon-link,
+ .markdown-body h4 .octicon-link,
+ .markdown-body h5 .octicon-link,
+ .markdown-body h6 .octicon-link {
+ color: var(--color-fg-default);
+ vertical-align: middle;
+ visibility: hidden;
+ }
+
+ .markdown-body h1:hover .anchor,
+ .markdown-body h2:hover .anchor,
+ .markdown-body h3:hover .anchor,
+ .markdown-body h4:hover .anchor,
+ .markdown-body h5:hover .anchor,
+ .markdown-body h6:hover .anchor {
+ text-decoration: none;
+ }
+
+ .markdown-body h1:hover .anchor .octicon-link,
+ .markdown-body h2:hover .anchor .octicon-link,
+ .markdown-body h3:hover .anchor .octicon-link,
+ .markdown-body h4:hover .anchor .octicon-link,
+ .markdown-body h5:hover .anchor .octicon-link,
+ .markdown-body h6:hover .anchor .octicon-link {
+ visibility: visible;
+ }
+
+ .markdown-body h1 tt,
+ .markdown-body h1 code,
+ .markdown-body h2 tt,
+ .markdown-body h2 code,
+ .markdown-body h3 tt,
+ .markdown-body h3 code,
+ .markdown-body h4 tt,
+ .markdown-body h4 code,
+ .markdown-body h5 tt,
+ .markdown-body h5 code,
+ .markdown-body h6 tt,
+ .markdown-body h6 code {
+ padding: 0 .2em;
+ font-size: inherit;
+ }
+
+ .markdown-body summary h1,
+ .markdown-body summary h2,
+ .markdown-body summary h3,
+ .markdown-body summary h4,
+ .markdown-body summary h5,
+ .markdown-body summary h6 {
+ display: inline-block;
+ }
+
+ .markdown-body summary h1 .anchor,
+ .markdown-body summary h2 .anchor,
+ .markdown-body summary h3 .anchor,
+ .markdown-body summary h4 .anchor,
+ .markdown-body summary h5 .anchor,
+ .markdown-body summary h6 .anchor {
+ margin-left: -40px;
+ }
+
+ .markdown-body summary h1,
+ .markdown-body summary h2 {
+ padding-bottom: 0;
+ border-bottom: 0;
+ }
+
+ .markdown-body ul.no-list,
+ .markdown-body ol.no-list {
+ padding: 0;
+ list-style-type: none;
+ }
+
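+ /* The trailing "s" below is the case-sensitive attribute-selector flag,
+ so type="a" and type="A" resolve to different list styles. */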
+ .markdown-body ol[type="a" s] {
+ list-style-type: lower-alpha;
+ }
+
+ .markdown-body ol[type="A" s] {
+ list-style-type: upper-alpha;
+ }
+
+ .markdown-body ol[type="i" s] {
+ list-style-type: lower-roman;
+ }
+
+ .markdown-body ol[type="I" s] {
+ list-style-type: upper-roman;
+ }
+
+ .markdown-body ol[type="1"] {
+ list-style-type: decimal;
+ }
+
+ .markdown-body div>ol:not([type]) {
+ list-style-type: decimal;
+ }
+
+ .markdown-body ul ul,
+ .markdown-body ul ol,
+ .markdown-body ol ol,
+ .markdown-body ol ul {
+ margin-top: 0;
+ margin-bottom: 0;
+ }
+
+ .markdown-body li>p {
+ margin-top: 16px;
+ }
+
+ .markdown-body li+li {
+ margin-top: .25em;
+ }
+
+ .markdown-body dl {
+ padding: 0;
+ }
+
+ .markdown-body dl dt {
+ padding: 0;
+ margin-top: 16px;
+ font-size: 1em;
+ font-style: italic;
+ font-weight: var(--base-text-weight-semibold, 600);
+ }
+
+ .markdown-body dl dd {
+ padding: 0 16px;
+ margin-bottom: 16px;
+ }
+
+ .markdown-body table th {
+ font-weight: var(--base-text-weight-semibold, 600);
+ }
+
+ .markdown-body table th,
+ .markdown-body table td {
+ padding: 6px 13px;
+ border: 1px solid var(--color-border-default);
+ }
+
+ .markdown-body table td>:last-child {
+ margin-bottom: 0;
+ }
+
+ .markdown-body table tr {
+ background-color: var(--color-canvas-default);
+ border-top: 1px solid var(--color-border-muted);
+ }
+
+ .markdown-body table tr:nth-child(2n) {
+ background-color: var(--color-canvas-subtle);
+ }
+
+ .markdown-body table img {
+ background-color: transparent;
+ }
+
+ .markdown-body img[align=right] {
+ padding-left: 20px;
+ }
+
+ .markdown-body img[align=left] {
+ padding-right: 20px;
+ }
+
+ .markdown-body .emoji {
+ max-width: none;
+ vertical-align: text-top;
+ background-color: transparent;
+ }
+
+ .markdown-body span.frame {
+ display: block;
+ overflow: hidden;
+ }
+
+ .markdown-body span.frame>span {
+ display: block;
+ float: left;
+ width: auto;
+ padding: 7px;
+ margin: 13px 0 0;
+ overflow: hidden;
+ border: 1px solid var(--color-border-default);
+ }
+
+ .markdown-body span.frame span img {
+ display: block;
+ float: left;
+ }
+
+ .markdown-body span.frame span span {
+ display: block;
+ padding: 5px 0 0;
+ clear: both;
+ color: var(--color-fg-default);
+ }
+
+ .markdown-body span.align-center {
+ display: block;
+ overflow: hidden;
+ clear: both;
+ }
+
+ .markdown-body span.align-center>span {
+ display: block;
+ margin: 13px auto 0;
+ overflow: hidden;
+ text-align: center;
+ }
+
+ .markdown-body span.align-center span img {
+ margin: 0 auto;
+ text-align: center;
+ }
+
+ .markdown-body span.align-right {
+ display: block;
+ overflow: hidden;
+ clear: both;
+ }
+
+ .markdown-body span.align-right>span {
+ display: block;
+ margin: 13px 0 0;
+ overflow: hidden;
+ text-align: right;
+ }
+
+ .markdown-body span.align-right span img {
+ margin: 0;
+ text-align: right;
+ }
+
+ .markdown-body span.float-left {
+ display: block;
+ float: left;
+ margin-right: 13px;
+ overflow: hidden;
+ }
+
+ .markdown-body span.float-left span {
+ margin: 13px 0 0;
+ }
+
+ .markdown-body span.float-right {
+ display: block;
+ float: right;
+ margin-left: 13px;
+ overflow: hidden;
+ }
+
+ .markdown-body span.float-right>span {
+ display: block;
+ margin: 13px auto 0;
+ overflow: hidden;
+ text-align: right;
+ }
+
+ .markdown-body code,
+ .markdown-body tt {
+ padding: .2em .4em;
+ margin: 0;
+ font-size: 85%;
+ white-space: break-spaces;
+ background-color: var(--color-neutral-muted);
+ border-radius: 6px;
+ }
+
+ .markdown-body code br,
+ .markdown-body tt br {
+ display: none;
+ }
+
+ .markdown-body del code {
+ text-decoration: inherit;
+ }
+
+ .markdown-body samp {
+ font-size: 85%;
+ }
+
+ .markdown-body pre code {
+ font-size: 100%;
+ }
+
+ .markdown-body pre>code {
+ padding: 0;
+ margin: 0;
+ word-break: normal;
+ white-space: pre;
+ background: transparent;
+ border: 0;
+ }
+
+ .markdown-body .highlight {
+ margin-bottom: 16px;
+ }
+
+ .markdown-body .highlight pre {
+ margin-bottom: 0;
+ word-break: normal;
+ }
+
+ .markdown-body .highlight pre,
+ .markdown-body pre {
+ padding: 16px;
+ overflow: auto;
+ font-size: 85%;
+ line-height: 1.45;
+ color: var(--color-fg-default);
+ background-color: var(--color-canvas-subtle);
+ border-radius: 6px;
+ }
+
+ .markdown-body pre code,
+ .markdown-body pre tt {
+ display: inline;
+ max-width: none;
+ padding: 0;
+ margin: 0;
+ overflow: visible;
+ line-height: inherit;
+ word-wrap: normal;
+ background-color: transparent;
+ border: 0;
+ }
+
+ .markdown-body .csv-data td,
+ .markdown-body .csv-data th {
+ padding: 5px;
+ overflow: hidden;
+ font-size: 12px;
+ line-height: 1;
+ text-align: left;
+ white-space: nowrap;
+ }
+
+ .markdown-body .csv-data .blob-num {
+ padding: 10px 8px 9px;
+ text-align: right;
+ background: var(--color-canvas-default);
+ border: 0;
+ }
+
+ .markdown-body .csv-data tr {
+ border-top: 0;
+ }
+
+ .markdown-body .csv-data th {
+ font-weight: var(--base-text-weight-semibold, 600);
+ background: var(--color-canvas-subtle);
+ border-top: 0;
+ }
+
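+ /* Footnote reference markers and the generated footnotes section */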
+ .markdown-body [data-footnote-ref]::before {
+ content: "[";
+ }
+
+ .markdown-body [data-footnote-ref]::after {
+ content: "]";
+ }
+
+ .markdown-body .footnotes {
+ font-size: 12px;
+ color: var(--color-fg-muted);
+ border-top: 1px solid var(--color-border-default);
+ }
+
+ .markdown-body .footnotes ol {
+ padding-left: 16px;
+ }
+
+ .markdown-body .footnotes ol ul {
+ display: inline-block;
+ padding-left: 16px;
+ margin-top: 16px;
+ }
+
+ .markdown-body .footnotes li {
+ position: relative;
+ }
+
+ .markdown-body .footnotes li:target::before {
+ position: absolute;
+ top: -8px;
+ right: -8px;
+ bottom: -8px;
+ left: -24px;
+ pointer-events: none;
+ content: "";
+ border: 2px solid var(--color-accent-emphasis);
+ border-radius: 6px;
+ }
+
+ .markdown-body .footnotes li:target {
+ color: var(--color-fg-default);
+ }
+
+ .markdown-body .footnotes .data-footnote-backref g-emoji {
+ font-family: monospace;
+ }
+
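+ /* Syntax-highlighting token colors for the pl-* classes emitted by
+ GitHub's Prettylights highlighter */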
+ .markdown-body .pl-c {
+ color: var(--color-prettylights-syntax-comment);
+ }
+
+ .markdown-body .pl-c1,
+ .markdown-body .pl-s .pl-v {
+ color: var(--color-prettylights-syntax-constant);
+ }
+
+ .markdown-body .pl-e,
+ .markdown-body .pl-en {
+ color: var(--color-prettylights-syntax-entity);
+ }
+
+ .markdown-body .pl-smi,
+ .markdown-body .pl-s .pl-s1 {
+ color: var(--color-prettylights-syntax-storage-modifier-import);
+ }
+
+ .markdown-body .pl-ent {
+ color: var(--color-prettylights-syntax-entity-tag);
+ }
+
+ .markdown-body .pl-k {
+ color: var(--color-prettylights-syntax-keyword);
+ }
+
+ .markdown-body .pl-s,
+ .markdown-body .pl-pds,
+ .markdown-body .pl-s .pl-pse .pl-s1,
+ .markdown-body .pl-sr,
+ .markdown-body .pl-sr .pl-cce,
+ .markdown-body .pl-sr .pl-sre,
+ .markdown-body .pl-sr .pl-sra {
+ color: var(--color-prettylights-syntax-string);
+ }
+
+ .markdown-body .pl-v,
+ .markdown-body .pl-smw {
+ color: var(--color-prettylights-syntax-variable);
+ }
+
+ .markdown-body .pl-bu {
+ color: var(--color-prettylights-syntax-brackethighlighter-unmatched);
+ }
+
+ .markdown-body .pl-ii {
+ color: var(--color-prettylights-syntax-invalid-illegal-text);
+ background-color: var(--color-prettylights-syntax-invalid-illegal-bg);
+ }
+
+ .markdown-body .pl-c2 {
+ color: var(--color-prettylights-syntax-carriage-return-text);
+ background-color: var(--color-prettylights-syntax-carriage-return-bg);
+ }
+
+ .markdown-body .pl-sr .pl-cce {
+ font-weight: bold;
+ color: var(--color-prettylights-syntax-string-regexp);
+ }
+
+ .markdown-body .pl-ml {
+ color: var(--color-prettylights-syntax-markup-list);
+ }
+
+ .markdown-body .pl-mh,
+ .markdown-body .pl-mh .pl-en,
+ .markdown-body .pl-ms {
+ font-weight: bold;
+ color: var(--color-prettylights-syntax-markup-heading);
+ }
+
+ .markdown-body .pl-mi {
+ font-style: italic;
+ color: var(--color-prettylights-syntax-markup-italic);
+ }
+
+ .markdown-body .pl-mb {
+ font-weight: bold;
+ color: var(--color-prettylights-syntax-markup-bold);
+ }
+
+ .markdown-body .pl-md {
+ color: var(--color-prettylights-syntax-markup-deleted-text);
+ background-color: var(--color-prettylights-syntax-markup-deleted-bg);
+ }
+
+ .markdown-body .pl-mi1 {
+ color: var(--color-prettylights-syntax-markup-inserted-text);
+ background-color: var(--color-prettylights-syntax-markup-inserted-bg);
+ }
+
+ .markdown-body .pl-mc {
+ color: var(--color-prettylights-syntax-markup-changed-text);
+ background-color: var(--color-prettylights-syntax-markup-changed-bg);
+ }
+
+ .markdown-body .pl-mi2 {
+ color: var(--color-prettylights-syntax-markup-ignored-text);
+ background-color: var(--color-prettylights-syntax-markup-ignored-bg);
+ }
+
+ .markdown-body .pl-mdr {
+ font-weight: bold;
+ color: var(--color-prettylights-syntax-meta-diff-range);
+ }
+
+ .markdown-body .pl-ba {
+ color: var(--color-prettylights-syntax-brackethighlighter-angle);
+ }
+
+ .markdown-body .pl-sg {
+ color: var(--color-prettylights-syntax-sublimelinter-gutter-mark);
+ }
+
+ .markdown-body .pl-corl {
+ text-decoration: underline;
+ color: var(--color-prettylights-syntax-constant-other-reference-link);
+ }
+
+ .markdown-body g-emoji {
+ display: inline-block;
+ min-width: 1ch;
+ font-family: "Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol";
+ font-size: 1em;
+ font-style: normal !important;
+ font-weight: var(--base-text-weight-normal, 400);
+ line-height: 1;
+ vertical-align: -0.075em;
+ }
+
+ .markdown-body g-emoji img {
+ width: 1em;
+ height: 1em;
+ }
+
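+ /* Task-list (checkbox) items rendered from - [ ] / - [x] markdown */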
+ .markdown-body .task-list-item {
+ list-style-type: none;
+ }
+
+ .markdown-body .task-list-item label {
+ font-weight: var(--base-text-weight-normal, 400);
+ }
+
+ .markdown-body .task-list-item.enabled label {
+ cursor: pointer;
+ }
+
+ .markdown-body .task-list-item+.task-list-item {
+ margin-top: 4px;
+ }
+
+ .markdown-body .task-list-item .handle {
+ display: none;
+ }
+
+ .markdown-body .task-list-item-checkbox {
+ margin: 0 .2em .25em -1.4em;
+ vertical-align: middle;
+ }
+
+ .markdown-body .contains-task-list:dir(rtl) .task-list-item-checkbox {
+ margin: 0 -1.6em .25em .2em;
+ }
+
+ .markdown-body .contains-task-list {
+ position: relative;
+ }
+
+ .markdown-body .contains-task-list:hover .task-list-item-convert-container,
+ .markdown-body .contains-task-list:focus-within .task-list-item-convert-container {
+ display: block;
+ width: auto;
+ height: 24px;
+ overflow: visible;
+ clip: auto;
+ }
+
+ .markdown-body ::-webkit-calendar-picker-indicator {
+ filter: invert(50%);
+ }
+
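+ /* Markdown alert blocks, i.e. > [!NOTE], [!IMPORTANT], [!WARNING],
+ [!TIP], and [!CAUTION] callouts */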
+ .markdown-body .markdown-alert {
+ padding: var(--base-size-8, 8px) var(--base-size-16, 16px);
+ margin-bottom: 16px;
+ color: inherit;
+ border-left: .25em solid var(--color-border-default);
+ }
+
+ .markdown-body .markdown-alert>:first-child {
+ margin-top: 0;
+ }
+
+ .markdown-body .markdown-alert>:last-child {
+ margin-bottom: 0;
+ }
+
+ .markdown-body .markdown-alert .markdown-alert-title {
+ display: flex;
+ font-weight: var(--base-text-weight-medium, 500);
+ align-items: center;
+ line-height: 1;
+ }
+
+ .markdown-body .markdown-alert.markdown-alert-note {
+ border-left-color: var(--color-accent-emphasis);
+ }
+
+ .markdown-body .markdown-alert.markdown-alert-note .markdown-alert-title {
+ color: var(--color-accent-fg);
+ }
+
+ .markdown-body .markdown-alert.markdown-alert-important {
+ border-left-color: var(--color-done-emphasis);
+ }
+
+ .markdown-body .markdown-alert.markdown-alert-important .markdown-alert-title {
+ color: var(--color-done-fg);
+ }
+
+ .markdown-body .markdown-alert.markdown-alert-warning {
+ border-left-color: var(--color-attention-emphasis);
+ }
+
+ .markdown-body .markdown-alert.markdown-alert-warning .markdown-alert-title {
+ color: var(--color-attention-fg);
+ }
+
+ .markdown-body .markdown-alert.markdown-alert-tip {
+ border-left-color: var(--color-success-emphasis);
+ }
+
+ .markdown-body .markdown-alert.markdown-alert-tip .markdown-alert-title {
+ color: var(--color-success-fg);
+ }
+
+ .markdown-body .markdown-alert.markdown-alert-caution {
+ border-left-color: var(--color-danger-emphasis);
+ }
+
+ .markdown-body .markdown-alert.markdown-alert-caution .markdown-alert-title {
+ color: var(--color-danger-fg);
+ }
+
\ No newline at end of file
diff --git a/frontend/pages/index.html b/frontend/pages/index.html
index b891b1e..1a023b8 100644
--- a/frontend/pages/index.html
+++ b/frontend/pages/index.html
@@ -8,13 +8,7 @@
- GroqCall is a proxy server that provides function calls for Groq's lightning-fast Language
- Processing Unit (LPU) and other AI providers. Additionally, the upcoming FuncyHub will offer a
- wide range of built-in functions, hosted on the cloud, making it easier to create AI assistants
- without the need to maintain function schemas in the codebase or execute them through multiple
- calls.
-
- Groq is a startup that designs highly specialized processor chips aimed specifically at running
- inference on large language models. They've introduced what they call the Language Processing
- Unit (LPU), and the speed is astounding—capable of producing 500 to 800 tokens per second or
- more. I've become a big fan of Groq and their community;
-
-
- I admire what they're doing. It feels like after discovering electricity, the next challenge is
- moving it around quickly and efficiently. Groq is doing just that for Artificial Intelligence,
- making it easily accessible everywhere. They've opened up their API to the cloud, but as of now,
- they lack a function call capability.
-
-
- Unable to wait for this feature, I built a proxy that enables function calls using the OpenAI
- interface, allowing it to be called from any library. This engineering workaround has proven to
- be immensely useful in my company for various projects. Here's the link to the GitHub repository
- where you can explore and play around with it. I've included some examples in this collaboration
- for you to check out.
-