diff --git a/docs/concepts/prompt_adaptation.md b/docs/concepts/prompt_adaptation.md
index 790f54104..68b90dc1b 100644
--- a/docs/concepts/prompt_adaptation.md
+++ b/docs/concepts/prompt_adaptation.md
@@ -39,7 +39,6 @@ Create a sample prompt using `Prompt` class.
 
 ```{code-block} python
 from langchain.chat_models import ChatOpenAI
-from ragas.llms import LangchainLLMWrapper
 from ragas.llms.prompt import Prompt
 
 noun_extractor = Prompt(
@@ -55,7 +54,6 @@ examples=[{
 )
 
 openai_model = ChatOpenAI(model_name="gpt-4")
-openai_model = LangchainLLMWrapper(llm=openai_model)
 ```
 
 Prompt adaption is done using the `.adapt` method:
diff --git a/docs/howtos/customisations/gcp-vertexai.ipynb b/docs/howtos/customisations/gcp-vertexai.ipynb
index 70722c34a..55fab1c5a 100644
--- a/docs/howtos/customisations/gcp-vertexai.ipynb
+++ b/docs/howtos/customisations/gcp-vertexai.ipynb
@@ -111,9 +111,7 @@
     "]\n",
     "```\n",
     "\n",
-    "By default Ragas uses `ChatOpenAI` for evaluations, lets swap that out with `ChatVertextAI`. We also need to change the embeddings used for evaluations for `OpenAIEmbeddings` to `VertextAIEmbeddings` for metrices that need it, which in our case is `answer_relevancy`.\n",
-    "\n",
-    "Now in order to use the new `ChatVertextAI` llm instance with Ragas metrics, you have to create a new instance of `RagasLLM` using the `ragas.llms.LangchainLLM` wrapper. Its a simple wrapper around langchain that make Langchain LLM/Chat instances compatible with how Ragas metrics will use them."
+    "By default Ragas uses `ChatOpenAI` for evaluations; let's swap that out with `ChatVertexAI`. We also need to change the embeddings used for evaluations from `OpenAIEmbeddings` to `VertexAIEmbeddings` for the metrics that need them, which in our case is `answer_relevancy`."
    ]
  },
  {
@@ -125,7 +123,6 @@
   "source": [
    "import google.auth\n",
    "from langchain.chat_models import ChatVertexAI\n",
-    "from ragas.llms import LangchainLLM\n",
    "from langchain.embeddings import VertexAIEmbeddings\n",
    "\n",
    "\n",
@@ -136,11 +133,8 @@
    "# authenticate to GCP\n",
    "creds, _ = google.auth.default(quota_project_id=\"tmp-project-404003\")\n",
    "# create Langchain LLM and Embeddings\n",
-    "chat = ChatVertexAI(credentials=creds)\n",
-    "vertextai_embeddings = VertexAIEmbeddings(credentials=creds)\n",
-    "\n",
-    "# create a wrapper around it\n",
-    "ragas_vertexai_llm = LangchainLLM(chat)"
+    "ragas_vertexai_llm = ChatVertexAI(credentials=creds)\n",
+    "vertextai_embeddings = VertexAIEmbeddings(credentials=creds)"
   ]
  },
  {
diff --git a/docs/howtos/customisations/llms.ipynb b/docs/howtos/customisations/llms.ipynb
index d1f34504e..7f96d1a78 100644
--- a/docs/howtos/customisations/llms.ipynb
+++ b/docs/howtos/customisations/llms.ipynb
@@ -188,7 +188,7 @@
   "id": "c9ddf74a-9830-4e1a-a4dd-7e5ec17a71e4",
   "metadata": {},
   "source": [
-    "Now lets create an Langchain llm instance and wrap it with `LangchainLLMWrapper` class. Because vLLM can run in OpenAI compatibilitiy mode, we can use the `ChatOpenAI` class as it is with small tweaks."
+    "Now let's create a Langchain LLM instance. Because vLLM can run in OpenAI compatibility mode, we can use the `ChatOpenAI` class as-is with small tweaks."
   ]
  },
  {
@@ -199,7 +199,6 @@
   "outputs": [],
   "source": [
    "from langchain_openai.chat_models import ChatOpenAI\n",
-    "from ragas.llms.base import LangchainLLMWrapper\n",
    "\n",
    "inference_server_url = \"http://localhost:8080/v1\"\n",