Commit: prompt command created
mgonzs13 committed Jul 3, 2024
1 parent 5fbd5ad commit 98f1ec1
Showing 4 changed files with 65 additions and 10 deletions.
21 changes: 21 additions & 0 deletions README.md
@@ -10,6 +10,7 @@ This repository provides a set of ROS 2 packages to integrate [llama.cpp](https:
3. [Usage](#usage)
- [Launch Files](#launch-files)
- [ROS 2 Clients](#ros-2-clients)
- [llama_cli](#llama_cli)
- [LangChain](#langchain)
4. [Demos](#demos)

@@ -342,6 +343,26 @@ class ExampleNode(Node):

</details>

### llama_cli

The llama_cli package adds commands to llama_ros to speed up testing GGUF-based LLMs within the ROS 2 ecosystem. The following commands are integrated into the ROS 2 CLI:

#### launch

This command launches an LLM from a YAML file. The YAML configuration is used to launch the LLM in the same way as a regular launch file. Here is an example of how to use it:

```shell
$ ros2 llama launch ~/ros2_ws/src/llama_ros/llama_bringup/params/StableLM-Zephyr.yaml
```
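
Under the hood, this verb builds a `LaunchDescription` from the YAML file and runs it through a `LaunchService` via `llama_cli.api.launch_llm` (added in this commit, see the diff below). The same launch can therefore be triggered from Python; the path below is a placeholder:

```python
from llama_cli.api import launch_llm

# Blocks while the LaunchService runs the LLM; stop it with Ctrl-C.
launch_llm("/path/to/llama_bringup/params/StableLM-Zephyr.yaml")
```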

#### prompt

This command sends a prompt to a launched LLM. It takes the prompt as a string and, optionally, a temperature value. Here is an example of how to use it:

```shell
$ ros2 llama prompt "Do you know ROS 2?" -t 0.0
```
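
This verb is a thin wrapper around `llama_cli.api.prompt_llm` (see the diff below), so the same prompt can also be sent from Python:

```python
from llama_cli.api import prompt_llm

# Streams partial responses to stdout; temp=0.0 gives (near-)deterministic output.
prompt_llm("Do you know ROS 2?", temp=0.0)
```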

### LangChain

There is a [llama_ros integration for LangChain](llama_ros/llama_ros/langchain/), so prompt engineering techniques can be applied. Here is an example of how to use it.
29 changes: 29 additions & 0 deletions llama_cli/llama_cli/api/__init__.py
@@ -25,6 +25,21 @@
from launch import LaunchDescription
from llama_bringup.utils import create_llama_launch_from_yaml

import rclpy
from argparse import ArgumentTypeError
from llama_msgs.action import GenerateResponse
from llama_ros.llama_client_node import LlamaClientNode


def positive_float(inval):
    """argparse type for a non-negative float (e.g. the sampling temperature)."""
    try:
        ret = float(inval)
    except ValueError:
        raise ArgumentTypeError("Expects a floating point number")
    if ret < 0.0:
        raise ArgumentTypeError("Value must be non-negative")
    return ret
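
For illustration, the validator accepts zero (which is what makes `-t 0.0` in the README example work) and rejects negative or non-numeric input; this snippet is not part of the commit:

```python
positive_float("0.7")    # -> 0.7
positive_float("0.0")    # -> 0.0 (zero is accepted)
positive_float("-1")     # raises ArgumentTypeError
positive_float("float")  # raises ArgumentTypeError
```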


def launch_llm(file_path: str) -> None:
    ld = LaunchDescription([
@@ -33,3 +48,17 @@ def launch_llm(file_path: str) -> None:
    ls = LaunchService()
    ls.include_launch_description(ld)
    ls.run()


def prompt_llm(prompt: str, temp: float = 0.8) -> None:

    def text_cb(feedback) -> None:
        # Print each partial response as it streams in.
        print(feedback.feedback.partial_response.text, end="", flush=True)

    rclpy.init()
    llama_client = LlamaClientNode()
    goal = GenerateResponse.Goal()
    goal.prompt = prompt
    goal.sampling_config.temp = temp
    llama_client.generate_response(goal, text_cb)
    rclpy.shutdown()
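
For illustration, the same action client can accumulate the streamed text instead of printing it; this sketch reuses the `GenerateResponse` feedback layout from `prompt_llm` above and is not part of the commit:

```python
import rclpy
from llama_msgs.action import GenerateResponse
from llama_ros.llama_client_node import LlamaClientNode


def collect_response(prompt: str, temp: float = 0.0) -> str:
    """Send a prompt and return the full generated text as one string."""
    chunks = []

    def text_cb(feedback) -> None:
        # Accumulate each partial response instead of printing it.
        chunks.append(feedback.feedback.partial_response.text)

    rclpy.init()
    llama_client = LlamaClientNode()
    goal = GenerateResponse.Goal()
    goal.prompt = prompt
    goal.sampling_config.temp = temp
    llama_client.generate_response(goal, text_cb)
    rclpy.shutdown()
    return "".join(chunks)
```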
22 changes: 13 additions & 9 deletions llama_cli/llama_cli/verb/prompt.py

@@ -21,14 +21,18 @@
# SOFTWARE.


- import os
- from launch import LaunchDescription
- from llama_bringup.utils import create_llama_launch_from_yaml
- from ament_index_python.packages import get_package_share_directory
+ from ros2cli.verb import VerbExtension
+ from llama_cli.api import prompt_llm, positive_float


- def generate_launch_description():
-     return LaunchDescription([
-         create_llama_launch_from_yaml(os.path.join(
-             get_package_share_directory("llama_bringup"), "params", "Llama-3.yaml"))
-     ])
+ class PromptVerb(VerbExtension):
+
+     def add_arguments(self, parser, cli_name):
+         arg = parser.add_argument(
+             "prompt", help="prompt text for the LLM")
+         parser.add_argument(
+             "-t", "--temp", metavar="N", type=positive_float, default=0.8,
+             help="Temperature value (default: 0.8)")
+
+     def main(self, *, args):
+         prompt_llm(args.prompt, temp=args.temp)
3 changes: 2 additions & 1 deletion llama_cli/setup.py
@@ -21,6 +21,7 @@
    ],
    "llama_cli.verb": [
        "launch = llama_cli.verb.launch:LaunchVerb",
-   ]
+       "prompt = llama_cli.verb.prompt:PromptVerb",
+   ],
}
)
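
Note: since the verb is registered through a setuptools entry point, the package has to be rebuilt and the workspace re-sourced (e.g., `colcon build --packages-select llama_cli`) before `ros2 llama prompt` becomes available to the ROS 2 CLI.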
