bug: Colang 2.0 issue when using LangChain #891

Open
3 of 4 tasks
knitzschke opened this issue Dec 3, 2024 · 1 comment
Labels: bug (Something isn't working), status: needs triage (New issues that have not yet been reviewed or categorized)

knitzschke commented Dec 3, 2024

Did you check docs and existing issues?

  • I have read all the NeMo-Guardrails docs
  • I have updated the package to the latest version before submitting this issue
  • (optional) I have used the develop branch
  • I have searched the existing issues of NeMo-Guardrails

Python version (python --version)

Python 3.11.8

Operating system/version

Windows 11 Enterprise

NeMo-Guardrails version (if you must use a specific version and not the latest)

0.11.0

nemoguardrails==0.11.0
langchain==0.3.4
langchain-community==0.3.3
langchain-core==0.3.12
langchain-openai==0.2.3

Describe the bug

I am trying to use Colang 2.x in my LangChain app for a beta example. I am using LangChain with an Azure OpenAI model endpoint and trying to get the Dialog Rails example (hello_world_3, with LLM continuation) from the NVIDIA docs to work, following: https://docs.nvidia.com/nemo/guardrails/colang_2/getting_started/dialog-rails.html

However, when I invoke the chain to test whether the "hi" rail is working, I get the following error:

ValueError: The `output_vars` option is not supported for Colang 2.0 configurations.

Which originates from here: https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/nemoguardrails/rails/llm/llmrails.py#L882
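
For context, the guard at that line appears to reject any generation options that set output_vars when the config is Colang 2.x. Below is a minimal sketch of what presumably happens there (illustrative only, not the actual NeMo-Guardrails source); RunnableRails presumably sets output_vars internally to pass context through, which would explain why the error fires even though the notebook never sets it:

# Illustrative sketch (assumed, not the exact llmrails.py code):
# generation aborts when a Colang 2.x config is combined with the
# `output_vars` generation option.
if config.colang_version != "1.0" and options.output_vars:
    raise ValueError(
        "The `output_vars` option is not supported for Colang 2.0 configurations."
    )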

I am able to use Colang 1.0 successfully, but not Colang 2.0 inside the LangChain chain. When I use nemoguardrails chat I can test the "hi" rail, but the LLM continuation doesn't seem to work when I type what the example above uses, i.e. I get no response back and it just spins.
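
For reference, the CLI test was run roughly like this (assuming the ./config directory shown below):

nemoguardrails chat --config ./config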

My ./config/main.co file contains the following:

import core
import llm

flow main
  activate llm continuation
  activate greeting

flow greeting
  user expressed greeting
  bot express greeting

flow user expressed greeting
  user said "hi" or user said "hello"

flow bot express greeting
  bot say "Hello World! Im working for you!"

Within a Jupyter notebook, I have the following:

import os
from dotenv import load_dotenv

load_dotenv(override=True)

import nest_asyncio

nest_asyncio.apply()

from langchain_openai import AzureChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

from nemoguardrails import RailsConfig, LLMRails
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails


model = AzureChatOpenAI(
    ....  # credentials supplied here
)

output_parser = StrOutputParser()
prompt = ChatPromptTemplate.from_template("{topic}")

chain = prompt | model | output_parser

config = RailsConfig.from_path("./config/")
rails = RunnableRails(config)

chain_with_guardrails = prompt | (rails | model) | output_parser

text = "hi"

chain_with_guardrails.invoke(text)
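
As a possible workaround until RunnableRails supports Colang 2.0, calling the rails directly (which does not go through the output_vars path on its own) seems worth trying; a minimal sketch, reusing the model and config objects from the notebook above:

from nemoguardrails import RailsConfig, LLMRails

config = RailsConfig.from_path("./config/")
rails = LLMRails(config, llm=model)  # pass the AzureChatOpenAI model directly

# Direct generation does not involve RunnableRails or its `output_vars` option.
response = rails.generate(messages=[{"role": "user", "content": "hi"}])
print(response["content"])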

Steps To Reproduce

Follow the config file and Jupyter notebook code above.

Expected Behavior

  1. When invoking the chain in Jupyter, I should not get an error; instead I should receive the response from the rails flow defined in main.co.
  2. When testing in the nemoguardrails CLI, I should get an LLM-generated response to "how are you" instead of it constantly spinning up new workflows.

Actual Behavior

Described above.

knitzschke added the bug and status: needs triage labels on Dec 3, 2024
Pouyanpi (Collaborator) commented Dec 5, 2024

Thank you @knitzschke for reporting this problem. Currently there is a gap: some of the options that work with Colang 1.0 are not yet available with Colang 2.0. We will add support in a future release, probably 0.13.0.
