
Functionary run using llama-cpp-python not calling functions properly #317

@Rivridis

Description


Code Block

from llama_cpp import Llama
from llama_cpp.llama_tokenizer import LlamaHFTokenizer

# Use the HF AutoTokenizer instead of llama.cpp's tokenizer, because llama.cpp's
# tokenizer doesn't give the same result as Huggingface's. The likely reason is
# that new tokens were added to the tokenizer during training and llama.cpp
# doesn't handle them successfully.

llm = Llama(
    model_path=r"model\functionary-small-v3.2.Q4_0.gguf",
    chat_format="functionary-v2",
    n_ctx=4098,
    n_gpu_layers=20,
    tokenizer=LlamaHFTokenizer.from_pretrained("meetkai/functionary-small-v3.2-GGUF"),
)

messages = [
    {"role": "user", "content": "what's the weather like in Hanoi? What is your name?"}
]
# For functionary-7b-v2 we use "tools"; for functionary-7b-v1.4 we use
# "functions" = [{"name": "get_current_weather", "description": ..., "parameters": ...}]
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g., San Francisco, CA"
                    }
                },
                "required": ["location"]
            }
        }
    }
]

result = llm.create_chat_completion(
    messages=messages,
    tools=tools,
    tool_choice="auto",
)

print(result["choices"][0]["message"])
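For context, the tool_calls returned by create_chat_completion are normally consumed by dispatching on the function name and JSON-decoding the arguments, which is why the malformed name field shown below breaks function calling entirely. A minimal sketch of that consumption pattern (the get_current_weather implementation and dispatch table here are hypothetical, not part of the reported setup):

import json

# Hypothetical local implementation standing in for a real weather lookup.
def get_current_weather(location: str) -> str:
    return f"(stub) Sunny in {location}"

available_functions = {"get_current_weather": get_current_weather}

message = result["choices"][0]["message"]
for tool_call in message.get("tool_calls") or []:
    name = tool_call["function"]["name"]  # should be a registered tool name
    args = json.loads(tool_call["function"]["arguments"] or "{}")
    if name in available_functions:
        print(available_functions[name](**args))
    else:
        # With this bug, `name` holds the model's prose instead of a tool
        # name, so dispatch falls through to here.
        print(f"Unknown tool: {name!r}")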

Output

{'role': 'assistant', 'content': None, 'tool_calls': [{'id': 'call_0KtImp3X2miKaQyPMzdO5EB6', 'type': 'function', 'function': {'name': 'all\nI\'m sorry, but I don\'t have a personal name. I\'m here to assist you with any information or questions you have. As for the weather in Hanoi, I can check that for you. Let me just check that for you.>>>get_current_weather\n{"location": "Hanoi, Vietnam"}', 'arguments': '{}'}}]}

The same thing happens with v2.4 as well. The content of the assistant response is None, and the raw Functionary output, prose plus the >>>get_current_weather delimiter syntax, is left unparsed in the function name field while arguments stays empty. Even when I ask a single question that needs no tool, like "What is your name?", the model's reply still lands in the function name field.
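For comparison, a correctly parsed response to the combined question should look roughly like the following (the call id and content text are illustrative, not actual output):

{'role': 'assistant',
 'content': "I don't have a personal name. Let me check the weather in Hanoi for you.",
 'tool_calls': [{'id': 'call_...',
                 'type': 'function',
                 'function': {'name': 'get_current_weather',
                              'arguments': '{"location": "Hanoi, Vietnam"}'}}]}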

Output

{'role': 'assistant', 'content': None, 'tool_calls': [{'id': 'call_00hfpraXBa8Ag3NNGsnsWPAA', 'type': 'function', 'function': {'name': 'all\nI\'m Functionary, a specialized language model developed by MeetKai Inc. My name is derived from the term "functionary," which refers to a person who performs a specific function or role. My purpose is to assist users by executing functions and providing information based on the tools and data available to me.', 'arguments': '{}'}}]}
