
deepseek-r1 not working with Ollama format correctly #29317

alyahmedaly opened this issue Jan 20, 2025 · 2 comments
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@alyahmedaly

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

No Code needed

Error Message and Stack Trace (if applicable)

There is no exception

Description

I was trying deepseek-r1:14b today with Ollama's `format` option set to a JSON schema.

It looks like the integration needs to be updated for this kind of model: when you ask a question, it starts with a thinking section before it begins answering.

As a result, I was getting wrong output; mainly, the returned output was part of the thinking process rather than the final answer.

The same code works correctly with Phi-4 and Llama 3.2.
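A possible client-side workaround while the integration catches up, assuming the model wraps its reasoning in `<think>...</think>` tags before the structured answer (as deepseek-r1 does when served through Ollama); the helper name `strip_think` is mine, not part of LangChain:

```python
import json
import re

def strip_think(text: str) -> str:
    """Remove a <think>...</think> reasoning block so only the
    model's final answer remains for JSON parsing."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# Example of the kind of raw output the model produces:
raw = '<think>\nThe user wants JSON, so I should...\n</think>\n{"answer": 42}'
cleaned = strip_think(raw)
data = json.loads(cleaned)  # parses cleanly once the think block is gone
```

This is only a stopgap: the proper fix is for the integration (or Ollama itself) to separate the reasoning tokens from the structured output.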

System Info

I'm using the js version of langchain

@dosubot dosubot bot added the 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature label Jan 20, 2025
@Hadi2525
Contributor

I am having the same issue calling the DeepSeek model.

from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama

# db_query_tool is a query-execution tool defined earlier in my workflow

query_check_system = """You are a SQL expert with a strong attention to detail.
Double check the SQLite query for common mistakes, including:
- Using NOT IN with NULL values
- Using UNION when UNION ALL should have been used
- Using BETWEEN for exclusive ranges
- Data type mismatch in predicates
- Properly quoting identifiers
- Using the correct number of arguments for functions
- Casting to the correct data type
- Using the proper columns for joins

If there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.

You will call the appropriate tool to execute the query after running this check."""

query_check_prompt = ChatPromptTemplate.from_messages(
    [("system", query_check_system), ("placeholder", "{messages}")]
)
query_check = query_check_prompt | ChatOllama(model="deepseek-r1:1.5b", temperature=0).bind_tools(
    [db_query_tool], tool_choice="required"
)

query_check.invoke({"messages": [("user", "SELECT * FROM Artist LIMIT 10;")]})

These are the related issues raised in Ollama: ollama/ollama#8552 and ollama/ollama#8517.

@alyahmedaly
Author

I'm using the JS version of LangChain with Ollama, and I can now use the `format` option to return JSON. I will leave this issue open for now, since it looks like others are having a different experience.
