feat(agents): Add on_llm_start and on_llm_end Lifecycle Hooks #987
Conversation
When `execute_tools_and_side_effects` takes long, these hooks would indeed be helpful for understanding the exact time spent in the LLM. One quick thing I noticed is that this implementation probably does not yet support streaming patterns. @rm-openai do you think having these two hooks is good to go?
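To illustrate the timing use case mentioned in the comment above, here is a minimal, self-contained sketch. The `AgentHooks` stand-in and the hook signatures (`on_llm_start(context, agent, system_prompt, input_items)` and `on_llm_end(context, agent, response)`) are assumptions modeled on this PR's description, not necessarily the SDK's exact API.

```python
import time

# Minimal stand-in for agents.lifecycle.AgentHooks; the real base class
# and hook signatures may differ from this sketch.
class AgentHooks:
    async def on_llm_start(self, context, agent, system_prompt, input_items): ...
    async def on_llm_end(self, context, agent, response): ...

class TimingHooks(AgentHooks):
    """Record the wall-clock duration of each LLM call."""

    def __init__(self):
        self.llm_durations = []
        self._start = None

    async def on_llm_start(self, context, agent, system_prompt, input_items):
        # Fired immediately before the model call.
        self._start = time.monotonic()

    async def on_llm_end(self, context, agent, response):
        # Fired immediately after the model call returns.
        self.llm_durations.append(time.monotonic() - self._start)
```

Attaching an instance like this as `agent.hooks` would separate time spent inside the LLM from time spent in tool execution and side effects.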
Thanks for the feedback—here’s a quick run‑through:
New: Streaming Support for
Thanks again for the valuable feedback, @seratch.
Hi @seratch and @rm-openai,
This PR is stale because it has been open for 10 days with no activity.
This PR is still relevant and ready for review. @seratch @rm-openai — would appreciate a quick look or approval when you have a moment. Thanks!
To me, these changes are safe and this is a good enhancement for certain use cases.
Looks like it has conflicts.
@uzair330 Can you resolve the conflicts? Then we'll merge this PR!
Force-pushed from 1a54b5b to a26fd39.
Hello @seratch and @rm-openai, Thanks for the reviews and approvals. I’ve rebased this branch onto the latest main, resolved all conflicts, and confirmed it’s up to date. CI jobs are currently pending and appear to require maintainer approval to run. The branch is ready for final verification — please trigger the workflows when convenient, or let me know if you’d like any further changes.
Motivation
Currently, `AgentHooks` provides valuable lifecycle events for the start/end of an agent run and for tool execution (`on_tool_start`/`on_tool_end`). However, developers lack the ability to observe the agent's execution at the language-model level. This PR introduces two new hooks, `on_llm_start` and `on_llm_end`, to provide this deeper level of observability. This change enables several key use cases.
Summary of Changes
`src/agents/lifecycle.py`
Added two new async methods, `on_llm_start` and `on_llm_end`, to the `AgentHooks` base class, matching the existing `on_*_start`/`on_*_end` pattern.
`src/agents/run.py`
Wrapped the call to `model.get_response(...)` in `_get_new_response` with invocations of the new hooks so that they fire immediately before and after each LLM call.
`tests/test_agent_llm_hooks.py`
Added unit tests (using a mock model and spy hooks) to validate:
- The hook order `on_start → on_llm_start → on_llm_end → on_end` in a chat‑only run.
- The hook order `on_start → on_llm_start → on_llm_end → on_tool_start → on_tool_end → on_llm_start → on_llm_end → on_end` in a run with a tool call.
- That the hooks are skipped when `agent.hooks` is `None`.
Usage Examples
1. Async Example (awaitable via `run`)
2. Sync Example (blocking via `run_sync`)
Note
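The example code blocks themselves are not reproduced above, so here is a hedged, self-contained sketch of the intended pattern, mirroring the PR's mock-model tests. `get_new_response` is a hypothetical stand-in for the `_get_new_response` wrapping described in the summary; the real `Agent` and `Runner.run` APIs are not used here, and the hook signatures are assumptions.

```python
import asyncio

class LoggingHooks:
    """Spy hooks that record the lifecycle order, as in the PR's tests."""

    def __init__(self):
        self.events = []

    async def on_llm_start(self, context, agent, system_prompt, input_items):
        self.events.append("on_llm_start")

    async def on_llm_end(self, context, agent, response):
        self.events.append("on_llm_end")

# Hypothetical stand-in for the runner's _get_new_response: the hooks fire
# immediately before and after the model call, and are skipped when
# hooks is None.
async def get_new_response(agent, hooks, model_call):
    if hooks is not None:
        await hooks.on_llm_start(None, agent, None, [])
    response = await model_call()
    if hooks is not None:
        await hooks.on_llm_end(None, agent, response)
    return response

async def main():
    hooks = LoggingHooks()

    async def fake_model():  # mock LLM call
        return "final output"

    result = await get_new_response("agent", hooks, fake_model)
    print(hooks.events, result)
    # → ['on_llm_start', 'on_llm_end'] final output

asyncio.run(main())
```

A sync variant would simply block on the same coroutine, in the way `run_sync` blocks on `run`.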
Checklist
- Linted (`ruff`).