
Repeated calls to patch_all() eventually seem to infinite loop #11297

Open
palfrey opened this issue Nov 5, 2024 · 7 comments
Labels: Tracing Distributed Tracing

palfrey commented Nov 5, 2024

In ddtrace 2.14.4, if you call ddtrace.patch_all() many times, nothing bad seems to happen. In 2.15.1, at some point, something gets stuck in what looks like an infinite loop. This hit us because part of the app setup in our FastAPI app calls patch_all(), so pytest ended up calling it many times.

A workaround is something like the following:

import ddtrace

_PATCH_ALL_DONE = False

def setup_datadog() -> None:
    global _PATCH_ALL_DONE
    if _PATCH_ALL_DONE:
        # patch_all() has already run once; skip the repeat call.
        return
    ddtrace.patch_all()
    _PATCH_ALL_DONE = True
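With the guard in place, repeated calls are cheap no-ops. A minimal sketch of exercising that under pytest (hypothetical; assumes setup_datadog lives in a module named app_setup):

from app_setup import setup_datadog

def test_setup_datadog_is_idempotent():
    # Only the first call reaches ddtrace.patch_all(); later calls return early.
    for _ in range(100):
        setup_datadog()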
romainkomorndatadog (Contributor) commented

@palfrey, are you using our pytest integration (e.g. pytest --ddtrace) and using the --ddtrace-patch-all flag?

palfrey commented Nov 19, 2024

> @palfrey, are you using our pytest integration (e.g. pytest --ddtrace) and using the --ddtrace-patch-all flag?

Nope. But having read https://ddtrace.readthedocs.io/en/stable/integrations.html#pytest, I don't think that's what we want. We call patch_all in our app code, and we want to test the code that does that, not call patch_all from outside our code.

romainkomorndatadog (Contributor) commented

@palfrey, thanks for the update; I think what you're doing makes sense.

@mabdinur, I don't think this is a CI App issue. It seems more like an issue with ddtrace.patch_all() not being idempotent.

romainkomorndatadog added the Tracing Distributed Tracing label and removed the CI App label on Nov 19, 2024
mabdinur (Contributor) commented

@palfrey I wasn't able to reproduce this error using the following steps:

  1. Install ddtrace==2.15.1, pytest==8.3.3, fastapi==0.115.5
  2. Run the test below (e.g. pytest test_file.py -k test_patch_all):
import ddtrace

def test_patch_all():
    for _ in range(10):
        ddtrace.patch_all()

        import fastapi

        app = fastapi.FastAPI()

Can you modify the test above to reproduce the error you are seeing in your test suite?

palfrey commented Nov 19, 2024

> for _ in range(10):

Up to about 20 seems fine. At 21, the test takes 3.7s; at 22, 7.2s; at 23, 13.72s. I presume it keeps roughly doubling as that goes on.
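A minimal sketch of how such timings could be reproduced (a hypothetical harness, not the original measurement; absolute numbers will vary by machine):

import time

import ddtrace

# Time each successive patch_all() call; under ddtrace 2.15.1 the per-call
# cost reportedly balloons once the call count passes roughly 20.
for i in range(23):
    start = time.perf_counter()
    ddtrace.patch_all()
    print(f"call {i + 1}: {time.perf_counter() - start:.2f}s")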

mabdinur commented Nov 22, 2024

Investigation

Thanks for bringing this to our attention. Patching ASM Common Modules appears to be increasing the duration of ddtrace.patch_all(). This PR may have introduced the performance regression in v2.15: 4feddce#diff-19592480afa34d43276c07c13325236a0adb62ef044a88b48bffce01992e6689. It appears that we re-apply monkey patching to BytesIO.read and StringIO.read every time patch_all is called, stacking another wrapper layer on each call. This could be adding exponential overhead to all operations that involve reading strings. This patching should be applied at most once.
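To illustrate the failure mode in isolation (a toy sketch, not ddtrace's actual patching code; the Reader class and the _dd_patched sentinel name are made up for the example):

class Reader:
    # Stand-in for a patched I/O type such as BytesIO/StringIO.
    def read(self):
        return b"data"

def patch_read(cls):
    # Bug pattern: each call captures the current (possibly already wrapped)
    # read and stacks one more wrapper layer on top of it.
    original = cls.read
    def wrapped(self):
        # per-layer instrumentation work would run here
        return original(self)
    cls.read = wrapped

def patch_read_once(cls):
    # Fix pattern: a sentinel attribute makes re-patching a no-op.
    if getattr(cls.read, "_dd_patched", False):
        return
    patch_read(cls)
    cls.read._dd_patched = True

for _ in range(30):
    patch_read_once(Reader)  # only the first call actually wraps read()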

ddtrace patching for Exploit Prevention is enabled by default (even if IAST/ASM are disabled). I will speak with the team about this design decision.

Workaround

For now, if you are not using the IAST or ASM products in CI, you can safely disable patching for Exploit Prevention by setting DD_APPSEC_RASP_ENABLED=False.
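For example, in a pytest suite the variable can be set in conftest.py, as long as it runs before the first ddtrace import (placement is an assumption about your test layout):

# conftest.py
import os

# Must be set before ddtrace reads its configuration.
os.environ["DD_APPSEC_RASP_ENABLED"] = "False"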

cc: @gnufede

mabdinur (Contributor) commented

Resolved by: #11522
