Conversation


@renovate renovate bot commented Nov 21, 2025

This PR contains the following updates:

Package Change
langchain-core (source, changelog) 0.3.77 → 0.3.80
langchain-core (source, changelog) 0.3.79 → 0.3.80

GitHub Vulnerability Alerts

CVE-2025-65106

Context

A template injection vulnerability exists in LangChain's prompt template system that allows attackers to access Python object internals through template syntax. This vulnerability affects applications that accept untrusted template strings (not just template variables) in ChatPromptTemplate and related prompt template classes.

Templates allow attribute access (.) and indexing ([]) but not method invocation (()).

The combination of attribute access and indexing may enable exploitation depending on which objects are passed to templates. When template variables are simple strings (the common case), the impact is limited. However, when using MessagesPlaceholder with chat message objects, attackers can traverse through object attributes and dictionary lookups (e.g., __globals__) to reach sensitive data such as environment variables.
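To illustrate, here is a minimal plain-Python sketch (not taken from the advisory; it uses a hypothetical Example class rather than LangChain objects) of how str.format-style field syntax, which f-string templates build on, can combine attribute access and indexing to reach __globals__ and, from there, environment variables:

import os

class Example:
    def method(self):
        pass

obj = Example()

# str.format field syntax permits "." and "[]" but not "()": traverse a bound
# method's __globals__ dict, then index into it by name to reach the os module.
print("{o.method.__globals__[os]}".format(o=obj))
# -> <module 'os' ...>

# One more attribute hop reaches the process environment.
print("{o.method.__globals__[os].environ}".format(o=obj))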

The vulnerability specifically requires that applications accept template strings (the structure) from untrusted sources, not just template variables (the data). Most applications either do not use templates or else use hardcoded templates and are not vulnerable.

Affected Components

  • langchain-core package
  • Template formats:
    • F-string templates (template_format="f-string") - Vulnerability fixed
    • Mustache templates (template_format="mustache") - Defensive hardening
    • Jinja2 templates (template_format="jinja2") - Defensive hardening

Impact

Attackers who can control template strings (not just template variables) can:

  • Access Python object attributes and internal properties via attribute traversal
  • Extract sensitive information from object internals (e.g., __class__, __globals__)
  • Potentially escalate to more severe attacks depending on the objects passed to templates

Attack Vectors

1. F-string Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate

malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{msg.__class__.__name__}")],
    template_format="f-string"
)

# Note that this requires passing a placeholder variable for "msg.__class__.__name__".
result = malicious_template.invoke({"msg": "foo", "msg.__class__.__name__": "safe_placeholder"})

# Previously returned:
# >>> result.messages[0].content
# 'str'

2. Mustache Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage

msg = HumanMessage("Hello")

# Attacker controls the template string
malicious_template = ChatPromptTemplate.from_messages(
    [("human", "")],
    template_format="mustache"
)

result = malicious_template.invoke({"question": msg})

# Previously returned: "HumanMessage" (getattr() exposed internals)

3. Jinja2 Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage

msg = HumanMessage("Hello")

# Attacker controls the template string
malicious_template = ChatPromptTemplate.from_messages(
    [("human", "")],
    template_format="jinja2"
)

result = malicious_template.invoke({"question": msg})

# Could access non-dunder attributes/methods on objects

Root Cause

  1. F-string templates: The implementation used Python's string.Formatter().parse() to extract variable names from template strings. This method returns the complete field expression, including attribute access syntax:
    from string import Formatter
    
    template = "{msg.__class__} and {x}"
    print([var_name for (_, var_name, _, _) in Formatter().parse(template)])
    # Returns: ['msg.__class__', 'x']
    The extracted names were not validated to ensure they were simple identifiers. As a result, template strings containing attribute traversal and indexing expressions (e.g., {obj.__class__.__name__} or {obj.method.__globals__[os]}) were accepted and subsequently evaluated during formatting. While f-string templates do not support method calls with (), they do support [] indexing, which could allow traversal through dictionaries like __globals__ to reach sensitive objects.
  2. Mustache templates: By design, used getattr() as a fallback to support accessing attributes on objects (e.g., {{user.name}} on a User object). However, we decided to restrict this to simpler primitives that subclass dict, list, and tuple types as defensive hardening, since untrusted templates could exploit attribute access to reach internal properties like __class__ on arbitrary objects.
  3. Jinja2 templates: Jinja2's default SandboxedEnvironment blocks dunder attributes (e.g., __class__) but permits access to other attributes and methods on objects. While Jinja2 templates in LangChain are typically used with trusted template strings, as a defense-in-depth measure, we've restricted the environment to block all attribute and method access on objects
    passed to templates.
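
For reference, here is a small sketch of the default-sandbox behavior described in point 3, using plain Jinja2 (not LangChain code) and a hypothetical User class: ordinary attribute access is permitted, while dunder access is treated as unsafe.

from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

class User:
    name = "Ada"

env = SandboxedEnvironment()

# Non-dunder attribute access is allowed by the stock sandbox.
print(env.from_string("{{ u.name }}").render(u=User()))  # -> "Ada"

# Dunder access is rejected when the template tries to use it.
try:
    env.from_string("{{ u.__class__.__name__ }}").render(u=User())
except SecurityError as exc:
    print("blocked:", exc)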

Who Is Affected?

High Risk Scenarios

You are affected if your application:

  • Accepts template strings from untrusted sources (user input, external APIs, databases)
  • Dynamically constructs prompt templates based on user-provided patterns
  • Allows users to customize or create prompt templates

Example vulnerable code:

# User controls the template string itself
user_template_string = request.json.get("template")  # DANGEROUS

prompt = ChatPromptTemplate.from_messages(
    [("human", user_template_string)],
    template_format="mustache"
)

result = prompt.invoke({"data": sensitive_object})

Low/No Risk Scenarios

You are NOT affected if:

  • Template strings are hardcoded in your application code
  • Template strings come only from trusted, controlled sources
  • Users can only provide values for template variables, not the template structure itself

Example safe code:

# Template is hardcoded - users only control variables
prompt = ChatPromptTemplate.from_messages(
    [("human", "User question: {question}")],  # SAFE
    template_format="f-string"
)

# User input only fills the 'question' variable
result = prompt.invoke({"question": user_input})

The Fix

F-string Templates

F-string templates had a clear vulnerability where attribute access syntax was exploitable. We've added strict validation to prevent this:

  • Added validation to enforce that variable names must be valid Python identifiers
  • Rejects syntax like {obj.attr}, {obj[0]}, or {obj.__class__}
  • Only allows simple variable names: {variable_name}
# After fix - these are rejected at template creation time
ChatPromptTemplate.from_messages(
    [("human", "{msg.__class__}")],  # ValueError: Invalid variable name
    template_format="f-string"
)
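
Simple identifier variables continue to work as before; a small sketch (not from the advisory):

prompt = ChatPromptTemplate.from_messages(
    [("human", "Hello {name}")],
    template_format="f-string"
)
print(prompt.invoke({"name": "world"}).messages[0].content)
# -> "Hello world"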

Mustache Templates (Defensive Hardening)

As defensive hardening, we've restricted what Mustache templates support to reduce the attack surface:

  • Replaced getattr() fallback with strict type checking
  • Only allows traversal into dict, list, and tuple types
  • Blocks attribute access on arbitrary Python objects
# After hardening - attribute access returns empty string
prompt = ChatPromptTemplate.from_messages(
    [("human", "")],
    template_format="mustache"
)
result = prompt.invoke({"msg": HumanMessage("test")})

# Returns: "" (access blocked)
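
Lookups into plain containers remain supported after the hardening; a small sketch, assuming a dict-valued variable (not from the advisory):

prompt = ChatPromptTemplate.from_messages(
    [("human", "Hello {{user.name}}")],
    template_format="mustache"
)
print(prompt.invoke({"user": {"name": "Ada"}}).messages[0].content)
# -> "Hello Ada" (dotted lookups into dict/list/tuple values still resolve)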

Jinja2 Templates (Defensive Hardening)

As defensive hardening, we've significantly restricted Jinja2 template capabilities:

  • Introduced _RestrictedSandboxedEnvironment that blocks ALL attribute/method access
  • Only allows simple variable lookups from the context dictionary
  • Raises SecurityError on any attribute access attempt
# After hardening - all attribute access is blocked
prompt = ChatPromptTemplate.from_messages(
    [("human", "")],
    template_format="jinja2"
)

prompt.invoke({"question": HumanMessage("Hello")})
# Raises SecurityError: Access to attributes is not allowed

Important Recommendation: Due to the expressiveness of Jinja2 and the difficulty of fully sandboxing it, we recommend reserving Jinja2 templates for trusted sources only. If you need to accept template strings from untrusted users, use f-string or mustache templates with the new restrictions instead.

While we've hardened the Jinja2 implementation, the nature of templating engines makes comprehensive sandboxing challenging. The safest approach is to only use Jinja2 templates when you control the template source.

Important Reminder: Many applications do not need prompt templates. Templates are useful for variable substitution and dynamic logic (if statements, loops, conditionals). However, if you're building a chatbot or conversational application, you can often work directly with message objects (e.g., HumanMessage, AIMessage, ToolMessage) without templates. Direct message construction avoids template-related security concerns entirely.
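
For example, a chatbot turn can be assembled directly from message objects (a minimal sketch; the user text stays plain data rather than becoming template syntax):

from langchain_core.messages import HumanMessage, SystemMessage

user_input = "What is the capital of France?"  # untrusted text, treated purely as data

messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage(user_input),  # no template parsing, so no injection surface
]
# Pass the list straight to a chat model, e.g. model.invoke(messages)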

Remediation

Immediate Actions

  1. Audit your code for any locations where template strings come from untrusted sources
  2. Update to the patched version of langchain-core
  3. Review template usage to ensure separation between template structure and user data

Best Practices

  • Consider if you need templates at all - Many applications can work directly with message objects (HumanMessage, AIMessage, etc.) without templates
  • Reserve Jinja2 for trusted sources - Only use Jinja2 templates when you fully control the template content

Update: Jinja2 Restrictions Reverted

The Jinja2 hardening introduced in the initial patch has been reverted as of langchain-core 1.1.3. The restriction was not addressing a direct vulnerability but was part of broader defensive hardening. In practice, it significantly limited legitimate Jinja2 usage and broke existing templates. Since Jinja2 is intended to be used only with trusted template sources, the original behavior has been restored. Users should continue to avoid accepting untrusted template strings when using Jinja2, but no security issue exists with trusted templates.


Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


coderabbitai bot commented Nov 21, 2025

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

  • 🔍 Trigger a full review

Comment @coderabbitai help to get the list of available commands and usage tips.

@renovate renovate bot force-pushed the renovate/pypi-langchain-core-vulnerability branch 8 times, most recently from 56744d1 to bc30722 on November 26, 2025 17:47
@renovate renovate bot force-pushed the renovate/pypi-langchain-core-vulnerability branch 4 times, most recently from bb841b3 to 24a77e7 on December 2, 2025 09:57
@renovate renovate bot force-pushed the renovate/pypi-langchain-core-vulnerability branch 3 times, most recently from 3d751b2 to 5608aef on December 12, 2025 16:21
@renovate renovate bot force-pushed the renovate/pypi-langchain-core-vulnerability branch 4 times, most recently from fc0ec94 to 402dfcf on December 19, 2025 22:55
@github-actions

There hasn't been any activity on this pull request recently. Therefore, this pull request has been automatically marked as stale and will be closed if no further activity occurs within seven days. Thank you for your contributions.

@github-actions github-actions bot added stale and removed stale labels Dec 27, 2025
@renovate renovate bot force-pushed the renovate/pypi-langchain-core-vulnerability branch from 402dfcf to 8b615fb on January 9, 2026 04:03
@renovate renovate bot force-pushed the renovate/pypi-langchain-core-vulnerability branch 4 times, most recently from 78e9adf to 8996521 on January 16, 2026 12:31
@renovate renovate bot force-pushed the renovate/pypi-langchain-core-vulnerability branch 7 times, most recently from 9c3c400 to e9d845f on January 26, 2026 22:20
@renovate renovate bot force-pushed the renovate/pypi-langchain-core-vulnerability branch 8 times, most recently from a102ae6 to c997967 on February 4, 2026 18:15
@renovate renovate bot force-pushed the renovate/pypi-langchain-core-vulnerability branch 3 times, most recently from 2492774 to 39b413c on February 10, 2026 12:19
@renovate renovate bot force-pushed the renovate/pypi-langchain-core-vulnerability branch from 39b413c to 768f9bc on February 11, 2026 13:21