Auth/confirmation resume ignores skip_summarization on re-executed tool response
Problem
When an invocation is resumed after an auth (or tool confirmation) pause, the `_AuthLlmRequestProcessor` (or `_RequestConfirmationLlmRequestProcessor`) correctly re-executes the original tool during `_preprocess_async` and yields the resulting `function_response_event`. However, `_run_one_step_async` does not check whether preprocessing already produced a final response, so it unconditionally proceeds to make an LLM call. This means `actions.skip_summarization = True` on the tool's response event is effectively ignored: the LLM is called to summarize (or re-process) a response that was meant to be final.
Steps to reproduce
- Configure an agent with a tool that sets `tool_context.actions.skip_summarization = True` on its response (see the sketch after this list)
- The tool requires OAuth or other credential-based auth via ADK's auth flow
- On first invocation, the tool triggers an auth request → the invocation pauses with an `adk_request_credential` event
- The client responds with credentials → `run_async` is called with the original `invocation_id` to resume
- The auth preprocessor re-executes the tool successfully and yields the function response with `skip_summarization = True`
- Expected: the invocation ends with the tool's response as the final event, no LLM call
- Actual: an LLM call is made after the tool response, generating an unwanted summary/follow-up
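
For reference, the kind of tool involved looks roughly like this. This is a minimal sketch, not taken from a real project: `fetch_report` and its response shape are hypothetical, and the credential-request step is elided because the exact auth configuration does not matter for the bug.

```python
from google.adk.tools import ToolContext


def fetch_report(query: str, tool_context: ToolContext) -> dict:
    # On the first call the tool would request credentials via ADK's auth
    # flow, which pauses the invocation with an adk_request_credential
    # event. That part is elided here.

    # On the resumed call the tool returns its payload and marks the
    # response as final, i.e. no LLM summarization pass should follow.
    tool_context.actions.skip_summarization = True
    return {"report": f"results for {query}"}
```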
Root cause
In `base_llm_flow.py`, `_run_one_step_async` has this structure:

```python
# 1. Preprocess (auth preprocessor yields function_response_event here)
async with Aclosing(self._preprocess_async(invocation_context, llm_request)) as agen:
    async for event in agen:
        yield event
if invocation_context.end_invocation:
    return

# 2. Resume checks (neither matches after auth flow)
events = invocation_context._get_events(current_invocation=True, current_branch=True)
if invocation_context.is_resumable and events and events[-1].get_function_calls():
    # Not taken: events[-1] is the function_response from the auth preprocessor,
    # which has function RESPONSES, not function CALLS
    ...

# 3. Falls through to LLM call
model_response_event = Event(...)
async with Aclosing(self._call_llm_async(...)):
    ...
```

After the auth preprocessor yields the tool's response during step 1, execution continues. The resumed-function-call detection at step 2 checks `events[-1].get_function_calls()`, but `events[-1]` is now the just-appended function response event (which contains function responses, not calls), so the condition is False. The code falls through to the LLM call at step 3.
The `skip_summarization` flag is only checked by `is_final_response()` in the outer `run_async` while loop, but by that point the LLM has already been called and its text response has become the `last_event`, overwriting the tool's response.
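
For context, the outer loop has roughly this shape (a paraphrase based on the description above, not a verbatim excerpt): only the last event of a step is tested, so the LLM's text response masks the tool response that carried `skip_summarization`.

```python
# Paraphrased shape of the outer BaseLlmFlow.run_async loop (simplified)
async def run_async(self, invocation_context):
    while True:
        last_event = None
        async for event in self._run_one_step_async(invocation_context):
            last_event = event
            yield event
        if not last_event or last_event.is_final_response():
            return
```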
The same issue applies to `_RequestConfirmationLlmRequestProcessor`, which follows the same pattern: yield a function response during preprocessing → `_run_one_step_async` ignores it and calls the LLM.
Proposed fix
Track whether `_preprocess_async` yielded a final response event, and return early if so:

```python
async def _run_one_step_async(self, invocation_context):
    llm_request = LlmRequest()
    preprocess_yielded_final = False
    async with Aclosing(
        self._preprocess_async(invocation_context, llm_request)
    ) as agen:
        async for event in agen:
            yield event
            if event.is_final_response():
                preprocess_yielded_final = True
    if invocation_context.end_invocation or preprocess_yielded_final:
        return
    # ... rest of method unchanged
```

This is safe for the normal (non-resume) flow because no standard preprocessor yields events with `is_final_response() == True`. The only preprocessors that yield events at all are `_AuthLlmRequestProcessor` and `_RequestConfirmationLlmRequestProcessor`, and only during their respective resume flows.
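
To illustrate the guard in isolation, here is a self-contained sketch with stand-ins (`FakeEvent`, `fake_preprocess`) for the ADK types; it only demonstrates that the early return prevents the stand-in LLM step from running.

```python
import asyncio
from contextlib import aclosing  # Python 3.10+
from dataclasses import dataclass


@dataclass
class FakeEvent:
    final: bool = False

    def is_final_response(self) -> bool:
        return self.final


async def fake_preprocess():
    # Stands in for the auth preprocessor re-executing the tool on resume
    # and yielding a function response marked final via skip_summarization.
    yield FakeEvent(final=True)


async def run_one_step():
    preprocess_yielded_final = False
    async with aclosing(fake_preprocess()) as agen:
        async for event in agen:
            yield event
            if event.is_final_response():
                preprocess_yielded_final = True
    if preprocess_yielded_final:
        return  # early return: no LLM call
    yield FakeEvent()  # stands in for the model_response_event


async def main():
    events = [event async for event in run_one_step()]
    assert len(events) == 1 and events[0].is_final_response()


asyncio.run(main())
```

Under the proposed fix the real flow behaves the same way: the tool's function response is the last event of the step, `is_final_response()` is True in the outer loop, and the invocation ends without an extra model call.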
Affected file
`src/google/adk/flows/llm_flows/base_llm_flow.py` (`_run_one_step_async` method)