fix: Enforce max queue length in transport #593
Changes from all commits
```diff
@@ -22,7 +22,7 @@ class BackgroundWorker(object):
     def __init__(self):
         # type: () -> None
         check_thread_support()
-        self._queue = queue.Queue(-1)  # type: Queue[Any]
+        self._queue = queue.Queue(30)  # type: Queue[Any]
         self._lock = Lock()
         self._thread = None  # type: Optional[Thread]
         self._thread_for_pid = None  # type: Optional[int]
```
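For context on the change above: a `maxsize` of `-1` makes `queue.Queue` unbounded, while a positive `maxsize` makes `put_nowait` raise `queue.Full` once that many items are pending. A minimal standalone illustration (the size of 2 is just for brevity):

```python
import queue

q = queue.Queue(2)  # bounded queue, analogous to queue.Queue(30) above
q.put_nowait("a")
q.put_nowait("b")
try:
    q.put_nowait("c")  # a third item exceeds maxsize
except queue.Full:
    print("queue full, item dropped")
```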
```diff
@@ -86,10 +86,18 @@ def start(self):

     def kill(self):
         # type: () -> None
         """
         Kill worker thread. Returns immediately. Not useful for
         waiting on shutdown for events, use `flush` for that.
         """
```
Contributor

In theory, I'd now consider that people should never call this directly, but instead use `self.flush(timeout=timeout, callback=callback)` followed by `self.transport.kill()`. Since there are some "shoulds", how about we handle `queue.Full` here as well? Thoughts?

Member (Author)

Yeah, I did the same thing as with the other `put_nowait` now.
```diff
         logger.debug("background worker got kill request")
         with self._lock:
             if self._thread:
-                self._queue.put_nowait(_TERMINATOR)
+                try:
+                    self._queue.put_nowait(_TERMINATOR)
+                except queue.Full:
+                    logger.debug("background worker queue full, kill failed")
                 self._thread = None
                 self._thread_for_pid = None
```
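A minimal sketch of the shutdown path the comment above recommends instead of calling `kill()` directly: flush pending events with a timeout, then terminate the transport. The helper name and default timeout are assumptions for illustration, not part of this PR:

```python
def shutdown(transport, timeout=2.0, callback=None):
    # Hypothetical helper: drain pending events first, then stop the
    # transport's background worker. Names and defaults are illustrative.
    if transport is not None:
        transport.flush(timeout=timeout, callback=callback)
        transport.kill()
```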
```diff
@@ -114,7 +122,10 @@ def _wait_flush(self, timeout, callback):
     def submit(self, callback):
         # type: (Callable[[], None]) -> None
         self._ensure_thread()
-        self._queue.put_nowait(callback)
+        try:
+            self._queue.put_nowait(callback)
+        except queue.Full:
+            logger.debug("background worker queue full, dropping event")
```
Contributor

Tested by sending loads of events in a row. This seems to be working as intended. In a given run, out of 106 events created in a loop, the client dropped 76 and the server received 30.

```python
import os

from sentry_sdk import init, capture_message, flush

init(debug=True)

for i in range(int(os.environ["TEST_N"])):
    capture_message("hello")

flush()
```
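The script reads the event count from the `TEST_N` environment variable, so the run quoted above corresponds to something like `TEST_N=106`; the exact split between dropped and delivered events will vary from run to run depending on how quickly the worker drains the queue.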
```diff

     def _target(self):
         # type: () -> None
```
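Putting the pieces together, here is a condensed, self-contained sketch of the drop-on-full behaviour this PR introduces. It is not the SDK's actual code; the class name, queue size constant, and logging setup are simplified stand-ins:

```python
import logging
import queue
import threading

logger = logging.getLogger(__name__)

_TERMINATOR = object()
MAX_QUEUE_SIZE = 30  # mirrors queue.Queue(30) in the diff above


class Worker(object):
    """Simplified background worker with a bounded queue."""

    def __init__(self):
        self._queue = queue.Queue(MAX_QUEUE_SIZE)
        self._thread = threading.Thread(target=self._target, daemon=True)
        self._thread.start()

    def submit(self, callback):
        # Drop the event instead of blocking when the queue is full.
        try:
            self._queue.put_nowait(callback)
        except queue.Full:
            logger.debug("worker queue full, dropping event")

    def kill(self):
        # Best-effort termination; if the queue is full the terminator
        # cannot be enqueued and the kill request is dropped.
        try:
            self._queue.put_nowait(_TERMINATOR)
        except queue.Full:
            logger.debug("worker queue full, kill failed")

    def _target(self):
        while True:
            callback = self._queue.get()
            if callback is _TERMINATOR:
                break
            callback()
```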
👍