feat(hyperliquid): add methods for info endpoint #49
Walkthrough

Adds two Hyperliquid Info API latency metrics and registers them in the Hyperliquid read path; introduces a reusable Hyperliquid /info metric base; adjusts HTTP timing calculations and error formatting; comments out a topics filter.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Scheduler as Scheduler
    participant Handler as MetricsHandler
    participant MetricBase as HyperliquidInfoMetricBase
    participant InfoAPI as Hyperliquid /info API
    Scheduler ->> Handler: trigger metric (e.g. HTTPClearinghouseStateLatencyMetric)
    Handler ->> MetricBase: instantiate with state
    Note right of MetricBase: extract user from state<br/>set method (e.g., "clearinghouseState")<br/>derive /info endpoint
    MetricBase ->> InfoAPI: POST /info { type: method, user }
    alt 2xx response
        InfoAPI -->> MetricBase: 200 OK + body
        MetricBase ->> Handler: record latency
    else timeout / error
        InfoAPI --x MetricBase: error/timeout
        MetricBase ->> Handler: report error
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Pre-merge checks and finishing touches: ❌ failed checks (1 warning); ✅ passed checks (2 passed)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
common/state/blockchain_fetcher.py (1)
126-133: Fix IndexError: guard against empty transactions before indexing
`isinstance(transactions[0], dict)` is evaluated before checking if `transactions` is non-empty, which will raise IndexError when there are no transactions. Apply this safer, clearer logic:

```diff
-        tx_hash = (
-            transactions[0].get("hash", "")
-            if isinstance(transactions[0], dict)
-            else transactions[0]
-            if transactions
-            else ""
-        )
+        if transactions:
+            first = transactions[0]
+            tx_hash = first.get("hash", "") if isinstance(first, dict) else first
+        else:
+            tx_hash = ""
```
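To make the guard easy to reason about, the suggested logic can be exercised as a standalone sketch; the `first_tx_hash` helper name is hypothetical, not code from the PR:

```python
from typing import Any


def first_tx_hash(transactions: list[Any]) -> str:
    """Return the hash of the first transaction, or "" when the list is empty.

    Checks emptiness before indexing, so no IndexError is possible.
    """
    if not transactions:
        return ""
    first = transactions[0]
    # Entries may be dicts with a "hash" key or plain hash strings.
    return first.get("hash", "") if isinstance(first, dict) else first


print(first_tx_hash([]))                   # → ""
print(first_tx_hash([{"hash": "0xabc"}]))  # → "0xabc"
print(first_tx_hash(["0xdef"]))            # → "0xdef"
```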
🧹 Nitpick comments (9)
common/http_timing.py (1)
22-39: Silence unused-argument lint in Trace callbacks

The callback parameters are required by aiohttp, but unused. Prefix with underscores to quiet linters.

```diff
-    async def on_request_start(session, context, params):
+    async def on_request_start(_session, _context, _params):
@@
-    async def on_dns_resolvehost_start(session, context, params):
+    async def on_dns_resolvehost_start(_session, _context, _params):
@@
-    async def on_dns_resolvehost_end(session, context, params):
+    async def on_dns_resolvehost_end(_session, _context, _params):
@@
-    async def on_connection_create_start(session, context, params):
+    async def on_connection_create_start(_session, _context, _params):
@@
-    async def on_connection_create_end(session, context, params):
+    async def on_connection_create_end(_session, _context, _params):
@@
-    async def on_request_end(session, context, params):
+    async def on_request_end(_session, _context, _params):
```

common/metric_types.py (3)
12-12: Import HttpTimingCollector for connection-time exclusion

You'll need the collector to enable exclude-connection-time behavior in shared timing.

```diff
-from common.http_timing import measure_http_request_timing
+from common.http_timing import HttpTimingCollector, measure_http_request_timing
```
16-16: Remove unused MAX_RETRIES constant

This constant is no longer used after the refactor; keeping it risks confusion with common.http_timing.MAX_RETRIES.

```diff
-MAX_RETRIES = 2
```
200-201: Guard against missing endpoint

`endpoints.get_endpoint()` is annotated as possibly None. Fail fast with a clear error if unset.

```diff
-        endpoint: str | None = self.config.endpoints.get_endpoint()
+        endpoint: str | None = self.config.endpoints.get_endpoint()
+        if not endpoint:
+            raise ValueError("HTTP endpoint is not configured")
```

common/hyperliquid_info_base.py (2)
109-111: Remove unused variable or validate payload

`response_data` is assigned but never used. Either drop the assignment or validate the JSON shape.

```diff
-        response_data = await response.json()
+        await response.json()  # Ensure response is valid JSON
```
66-78: Endpoint transformation: handle idempotently when already '/info'

Optional: if an endpoint already ends with '/info', avoid appending '/info' again.

```diff
-        if base_endpoint.endswith("/evm"):
+        if base_endpoint.endswith("/info"):
+            return base_endpoint
+        if base_endpoint.endswith("/evm"):
             return base_endpoint.replace("/evm", "/info")
```

metrics/hyperliquid.py (1)
102-104: Re-introduce a topics filter to bound log volume

Dropping topics can inflate payloads/rates and skew latency, especially on busy contracts.

Apply this diff in-place to restore a bounded filter:

```diff
-                # "topics": [
-                #     "0x7fcf532c15f0a6db0bd6d0e038bea71d30d808c7d98cb3bf7268a95bf5081b65"  # Withdrawal event
-                # ],
+                "topics": [
+                    WITHDRAWAL_EVENT_SIG,  # Withdrawal event
+                ],
```

And add this module constant (outside the selected lines):

```python
# At module top-level
WITHDRAWAL_EVENT_SIG = "0x7fcf532c15f0a6db0bd6d0e038bea71d30d808c7d98cb3bf7268a95bf5081b65"
```

metrics/hyperliquid_info.py (2)
16-20: Silence Ruff ARG004 and de-duplicate the hard-coded user

Rename the unused parameter and reuse a module constant to avoid duplication.

Apply this diff to the method:

```diff
-    def get_params_from_state(state_data: dict[str, Any]) -> dict[str, str]:
-        return {"user": "0x31ca8395cf837de08b24da3f660e77761dfb974b"}
+    def get_params_from_state(_state_data: dict[str, Any]) -> dict[str, str]:
+        return {"user": MONITOR_USER_ADDRESS}
```

Add this constant near the imports (outside the selected lines):

```python
# At module top-level
MONITOR_USER_ADDRESS = "0x31ca8395cf837de08b24da3f660e77761dfb974b"
```
30-33: Apply the same Ruff fix and constant reuse to openOrders

Mirror the change to silence ARG004 and remove duplication.

```diff
-    def get_params_from_state(state_data: dict[str, Any]) -> dict[str, str]:
-        return {"user": "0x31ca8395cf837de08b24da3f660e77761dfb974b"}
+    def get_params_from_state(_state_data: dict[str, Any]) -> dict[str, str]:
+        return {"user": MONITOR_USER_ADDRESS}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- api/read/hyperliquid.py (2 hunks)
- common/http_timing.py (1 hunks)
- common/hyperliquid_info_base.py (1 hunks)
- common/metric_types.py (6 hunks)
- common/state/blockchain_fetcher.py (1 hunks)
- metrics/hyperliquid.py (4 hunks)
- metrics/hyperliquid_info.py (1 hunks)
- metrics/solana_landing_rate.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
metrics/hyperliquid_info.py (2)
  common/hyperliquid_info_base.py (3)
    HyperliquidInfoMetricBase (14-118), method (23-25), get_params_from_state (62-64)
  metrics/hyperliquid.py (10)
    method (14-15, 33-34, 51-52, 68-69, 81-82), get_params_from_state (18-26, 42-44, 55-61, 72-74, 90-106)

metrics/solana_landing_rate.py (1)
  config/defaults.py (1)
    MetricsServiceConfig (6-43)

common/metric_types.py (4)
  common/http_timing.py (1)
    measure_http_request_timing (62-103)
  common/hyperliquid_info_base.py (2)
    process_data (116-118), fetch_data (79-114)
  common/base_metric.py (5)
    process_data (53-54), update_metric_value (82-91), mark_success (93-95), mark_failure (97-102), handle_error (104-131)
  common/metric_config.py (1)
    get_endpoint (32-34)

common/hyperliquid_info_base.py (5)
  common/http_timing.py (1)
    measure_http_request_timing (62-103)
  common/metric_config.py (4)
    MetricConfig (37-50), MetricLabelKey (8-16), MetricLabels (65-111), update_label (90-96)
  common/metric_types.py (7)
    HttpMetric (102-133), method (145-147), get_params_from_state (194-196), get_endpoint (109-111), fetch_data (106-107), fetch_data (198-233), process_data (235-237)
  common/metrics_handler.py (1)
    MetricsHandler (19-138)
  metrics/hyperliquid_info.py (4)
    method (12-14), method (26-28), get_params_from_state (17-19), get_params_from_state (31-33)

api/read/hyperliquid.py (1)
  metrics/hyperliquid_info.py (2)
    HTTPClearinghouseStateLatencyMetric (8-19), HTTPOpenOrdersLatencyMetric (22-33)
🪛 Ruff (0.13.1)
metrics/hyperliquid_info.py
  17: Unused static method argument: state_data (ARG004)
  31: Unused static method argument: state_data (ARG004)

common/metric_types.py
  70: Avoid specifying long messages outside the exception class (TRY003)

common/http_timing.py
  22, 25, 28, 31, 34, 37: Unused function arguments: session, context, params (ARG001)
  68: Unused function argument: exclude_connection_time (ARG001)
  101: Avoid specifying long messages outside the exception class (TRY003)

common/hyperliquid_info_base.py
  38: Avoid specifying long messages outside the exception class (TRY003)
  57: Unused static method argument: state_data (ARG004)
  62: Unused static method argument: state_data (ARG004)
  109: Local variable response_data is assigned to but never used; remove the assignment (F841)
🔇 Additional comments (10)
metrics/solana_landing_rate.py (1)
81-83: LGTM, formatting only

No behavior change; the multi-line call is fine.
common/http_timing.py (1)
91-97: Good 429 handling with wait-and-retry

Releasing the response before sleeping and honoring Retry-After is correct.

Consider capping `Retry-After` to a sane upper bound (e.g., 10s) to avoid long stalls from misconfigured servers. Want a patch for that?
12-15: Wiring of new Info metrics looks correct

The imports are accurate and scoped to the two new /info latency metrics.
33-34: Adding Info metrics to the gated list is appropriate; confirm endpoint compatibility

Looks good. Please verify at runtime that the resolved base endpoint ends with "/evm" (so HyperliquidInfoMetricBase correctly transforms it to "/info"), or that the fallback concatenation path produces the expected URL for your envs.
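For illustration, the "/evm" to "/info" derivation described above, including the fallback concatenation path, can be sketched as a pure helper; the function name and example URLs are hypothetical, not code from the PR:

```python
def derive_info_endpoint(base_endpoint: str) -> str:
    """Derive a Hyperliquid /info URL from a base endpoint.

    Idempotent sketch: an already-/info endpoint passes through,
    an /evm endpoint is rewritten, and anything else gets /info appended
    (the fallback concatenation path).
    """
    base_endpoint = base_endpoint.rstrip("/")
    if base_endpoint.endswith("/info"):
        return base_endpoint
    if base_endpoint.endswith("/evm"):
        return base_endpoint[: -len("/evm")] + "/info"
    return base_endpoint + "/info"


print(derive_info_endpoint("https://rpc.example.com/evm"))   # → https://rpc.example.com/info
print(derive_info_endpoint("https://rpc.example.com/info"))  # → https://rpc.example.com/info
print(derive_info_endpoint("https://rpc.example.com"))       # → https://rpc.example.com/info
```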
metrics/hyperliquid.py (3)
1-5: Docstring update improves discoverability

Good pointer to the Info metrics module.
25-26: Use of 'latest' tag for eth_call params is fine

This is a valid and typical choice for this probe.
57-61: Switch to 'latest' for balance queries; confirm intention

Using "latest" is valid; just confirm you no longer need the previous "old_block" stability semantics for this metric.
metrics/hyperliquid_info.py (3)
1-5: Module scaffolding is clean

Docstring, typing, and base import are all appropriate.
8-15: Method name for clearinghouseState is correct

Matches the payload's "type".
22-29: Method name for openOrders is correct

Consistent with the base contract.
Actionable comments posted: 1
🧹 Nitpick comments (1)
common/http_timing.py (1)
75-118: Relying on `session._trace_configs` is brittle

Lines 75-118 mutate `ClientSession._trace_configs`, which is a private attribute; aiohttp does not guarantee its stability (see the 3.12.14 notes about internal parser changes), so a minor upgrade can break this helper. Prefer wiring the `TraceConfig` in via public APIs, e.g., have the caller create the session with `trace_configs=[collector.create_trace_config()]` or accept an optional collector/trace config parameter, so we stay on supported surface area.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
common/http_timing.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.13.1)
common/http_timing.py
102: Avoid specifying long messages outside the exception class (TRY003)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 2
🧹 Nitpick comments (3)
common/http_timing.py (3)
75-76: Avoid mutating the private session._trace_configs; at least guard access

Accessing `_trace_configs` is private API and can break on aiohttp changes. If you keep it, guard removal to avoid attribute errors. Longer-term, consider a one-off ClientSession for timing to avoid touching internals.

```diff
-    session._trace_configs.append(trace_config)
+    session._trace_configs.append(trace_config)  # noqa: SLF001  # Private attribute; see note below
@@
-    if trace_config in session._trace_configs:
-        session._trace_configs.remove(trace_config)
+    trace_configs = getattr(session, "_trace_configs", None)
+    if trace_configs is not None and trace_config in trace_configs:
+        trace_configs.remove(trace_config)
```

Follow-up:

- Verify our runtime environment pins an aiohttp version where `_trace_configs` exists.
- If you want, I can provide a minimal alternative using a temporary ClientSession with `trace_configs=[trace_config]` just for this call.

Also applies to: 121-122
97-101: Parse Retry-After robustly (seconds or HTTP-date) and clamp ≥ 0

The current `int()` cast fails on HTTP-date values and may go negative. Improve parsing and clamp to non-negative.

```diff
-            wait_time = int(response.headers.get("Retry-After", 3))
-            await response.release()
-            await asyncio.sleep(wait_time)
+            raw_retry_after = response.headers.get("Retry-After")
+            wait_time = 3
+            if raw_retry_after:
+                try:
+                    wait_time = int(raw_retry_after)
+                except ValueError:
+                    # Retry-After can be a HTTP-date
+                    try:
+                        retry_at = parsedate_to_datetime(raw_retry_after)
+                        now = datetime.datetime.now(retry_at.tzinfo)
+                        wait_time = int(max(0, (retry_at - now).total_seconds()))
+                    except Exception:
+                        wait_time = 3
+            await response.release()
+            await asyncio.sleep(max(0, wait_time))
```

Add these imports at the top of the file:

```python
import datetime
from email.utils import parsedate_to_datetime
```
105-106: Prefer explicit None check for response

Avoid relying on truthiness of ClientResponse.

```diff
-        if not response:
+        if response is None:
             raise ValueError("No response received")
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
common/http_timing.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.13.1)
common/http_timing.py
  86: Unexpected indentation (invalid-syntax)
  90: Expected except or finally after try block (invalid-syntax)
  90: Expected an identifier, but found a keyword 'if' that cannot be used here (invalid-syntax)
  90: Simple statements must be separated by newlines or semicolons (invalid-syntax)
  90-91: Expected an expression (invalid-syntax)
  91: Invalid assignment target (invalid-syntax)
  93: Expected an expression (invalid-syntax)
  94: Invalid annotated assignment target (invalid-syntax)
  94: Expected an identifier, but found a keyword 'else' that cannot be used here (invalid-syntax)
  94-95: Expected an expression (invalid-syntax)
  95: Unexpected indentation (invalid-syntax)
  105: unindent does not match any outer indentation level (invalid-syntax)
  109: unindent does not match any outer indentation level (invalid-syntax)
  111: Unexpected indentation (invalid-syntax)
  119: unindent does not match any outer indentation level (invalid-syntax)
  119: Expected a statement (invalid-syntax, reported twice)
  119-120: Expected a statement (invalid-syntax)
  121: Unexpected indentation (invalid-syntax)
🔇 Additional comments (1)
common/http_timing.py (1)
117-117: Ensure callers close or release the returned response

This function returns an open ClientResponse. Callers must read/release/close it to avoid connection leaks. Confirm all call sites do so (or switch this helper to read/release internally and return just status/body/headers).
Actionable comments posted: 2
🧹 Nitpick comments (3)
common/http_timing.py (3)
75-76: Avoid mutating private _trace_configs where possible

Appending to session._trace_configs is private API. If feasible, construct a short-lived ClientSession with trace_configs=[trace_config] for this measurement, or ensure strict scoping via trace_request_ctx (fix above) and keep usage contained.
110-118: Optional: add monotonic fallback if trace events are missing

If trace hooks don't fire (e.g., due to pool reuse/edge cases), return a monotonic duration fallback.

Example minimal changes (illustrative):

```diff
-    response = None
+    response = None
+    fallback_start = 0.0
+    fallback_end = 0.0
@@
-    # Send request
+    # Send request
+    fallback_start = time.monotonic()
@@
-    response = await session.request(
+    response = await session.request(
         method.upper(),
         url,
         trace_request_ctx=timing_collector,
         **request_kwargs,
     )
+    fallback_end = time.monotonic()
@@
-    total_time: float = timing_collector.get_total_time()
+    total_time: float = timing_collector.get_total_time() or max(0.0, fallback_end - fallback_start)
```
86-89: Minor: avoid sending JSON body with GET by default

Some servers treat GET bodies oddly. Consider limiting json payloads to methods that commonly carry bodies (POST/PUT/PATCH).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
common/http_timing.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.13.1)
common/http_timing.py
108: Avoid specifying long messages outside the exception class (TRY003)
🔇 Additional comments (2)
common/http_timing.py (2)
22-33: Scope trace callbacks to this request via trace_request_ctx

Without scoping, concurrent requests on the same session can pollute timings. Guard each handler with the per-request ctx.

Apply this diff:

```diff
     async def on_request_start(_session: Any, _context: Any, _params: Any) -> None:
-        self.timing["start"] = time.monotonic()
+        if getattr(_context, "trace_request_ctx", None) is not self:
+            return
+        self.timing["start"] = time.monotonic()
@@
     async def on_connection_create_start(_session: Any, _context: Any, _params: Any) -> None:
-        self.timing["conn_start"] = time.monotonic()
+        if getattr(_context, "trace_request_ctx", None) is not self:
+            return
+        self.timing["conn_start"] = time.monotonic()
@@
     async def on_connection_create_end(_session: Any, _context: Any, _params: Any) -> None:
-        self.timing["conn_end"] = time.monotonic()
+        if getattr(_context, "trace_request_ctx", None) is not self:
+            return
+        self.timing["conn_end"] = time.monotonic()
@@
     async def on_request_end(_session: Any, _context: Any, _params: Any) -> None:
-        self.timing["end"] = time.monotonic()
+        if getattr(_context, "trace_request_ctx", None) is not self:
+            return
+        self.timing["end"] = time.monotonic()
```
85-97: Use a single session.request with trace_request_ctx for all verbs

Unify method dispatch and pass trace_request_ctx so the scoped callbacks above work.

Apply this diff:

```diff
     # Prepare kwargs for any HTTP method, including optional JSON payload
-    request_kwargs: dict[str, Any] = {"headers": headers}
+    request_kwargs: dict[str, Any] = {}
+    if headers is not None:
+        request_kwargs["headers"] = headers
     if json_data is not None:
         request_kwargs["json"] = json_data
-
-    if method.upper() == "POST":
-        response = await session.post(
-            url, headers=headers, json=json_data
-        )
-
-    else:
-        response = await session.request(method.upper(), url, **request_kwargs)
+    response = await session.request(
+        method.upper(),
+        url,
+        trace_request_ctx=timing_collector,
+        **request_kwargs,
+    )
```
Actionable comments posted: 2
🧹 Nitpick comments (5)
common/http_timing.py (5)
127-135: Parse Retry-After HTTP-date and fractional seconds (avoid ValueError)

The current `int()` cast will raise on HTTP-date or fractional values and loses precision.

```diff
-            if response.status == 429 and retry_count < MAX_RETRIES - 1:
-                wait_time = int(
-                    response.headers.get("Retry-After", DEFAULT_RATE_LIMIT_WAIT)
-                )
-                await response.release()
-                # Exponential backoff
-                await asyncio.sleep(wait_time * (2 ** retry_count))
-                continue
+            if response.status == 429 and retry_count < MAX_RETRIES - 1:
+                retry_after = response.headers.get("Retry-After")
+                wait_time: float = float(DEFAULT_RATE_LIMIT_WAIT)
+                if retry_after:
+                    ra = retry_after.strip()
+                    if ra.isdigit():
+                        wait_time = float(ra)
+                    else:
+                        try:
+                            from datetime import datetime, timezone
+                            from email.utils import parsedate_to_datetime
+                            dt = parsedate_to_datetime(ra)
+                            if dt is not None:
+                                now = datetime.now(timezone.utc)
+                                wait_time = max(0.0, (dt - now).total_seconds())
+                        except Exception:
+                            pass
+                await response.release()
+                # Exponential backoff
+                await asyncio.sleep(wait_time * (2 ** retry_count))
+                continue
```
118-126: Build request kwargs defensively and support an optional aiohttp timeout

Avoid passing `headers=None`; allow the caller to pass a `ClientTimeout`.

```diff
 async def measure_http_request_timing(
     session: aiohttp.ClientSession,
     method: str,
     url: str,
     headers: Optional[dict[str, str]] = None,
     json_data: Optional[dict[str, Any]] = None,
     exclude_connection_time: bool = True,
+    timeout: Optional[aiohttp.ClientTimeout] = None,
 ) -> tuple[float, aiohttp.ClientResponse]:
@@
-    request_kwargs: dict[str, Any] = {"headers": headers}
-    if json_data is not None:
-        request_kwargs["json"] = json_data
+    request_kwargs: dict[str, Any] = {}
+    if headers is not None:
+        request_kwargs["headers"] = headers
+    if json_data is not None:
+        request_kwargs["json"] = json_data
+    if timeout is not None:
+        request_kwargs["timeout"] = timeout
```

Follow-up: ensure call sites still type-check; the new arg is optional.

Also applies to: 80-87
105-109: Avoid mutating private `session._trace_configs`

This relies on aiohttp internals and may break across versions. Prefer creating the session with `trace_configs=[collector.create_trace_config()]` or centralizing an instrumented session per service.

Would you like a small factory/context to provision an instrumented ClientSession and reuse it across calls?

Also applies to: 158-162
10-12: Remove or use unused constant

`DEFAULT_WEBSOCKET_TIMEOUT` is unused; consider removing it to appease linters.
205-211: Ruff TRY003: long messages in exceptions

If enforcing TRY003, shorten the messages or rely on exception fields; otherwise ignore.

```diff
-            raise aiohttp.ClientResponseError(
+            raise aiohttp.ClientResponseError(
                 request_info=response.request_info,
                 history=(),
                 status=response.status,
-                message=f"Status code: {response.status}",
+                message=None,
                 headers=response.headers,
             )
@@
-        raise ValueError(f"JSON-RPC error: {json_response['error']}")
+        raise ValueError("JSON-RPC error")
```

Also applies to: 217-217
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
common/http_timing.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.13.1)
common/http_timing.py
150: Avoid specifying long messages outside the exception class (TRY003)
217: Avoid specifying long messages outside the exception class (TRY003)
common/http_timing.py (Outdated)

```python
        if not response:
            if last_exception:
                raise last_exception
            raise ValueError("No response received after retries")
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix boolean checks for ClientResponse to avoid misclassifying errors and leaking connections
ClientResponse can be falsy for non-2xx; use identity checks.
```diff
-        if not response:
+        if response is None:
             if last_exception:
                 raise last_exception
             raise ValueError("No response received after retries")
@@
-        if response and not response.closed:
+        if response is not None and not response.closed:
             await response.release()
```

Also applies to: 221-223
🧰 Tools
🪛 Ruff (0.13.1)
150: Avoid specifying long messages outside the exception class (TRY003)
🤖 Prompt for AI Agents
In common/http_timing.py around lines 147-151 (and similarly at 221-223),
replace the falsy check "if not response" with an explicit identity check "if
response is None" so ClientResponse objects with non-2xx statuses are not
treated as missing; additionally ensure any non-None response that represents an
error is properly closed/released (e.g., call response.release() or await
response.read()/response.text() per async context) before raising to avoid
leaking connections.
Actionable comments posted: 2
🧹 Nitpick comments (4)
common/http_timing.py (4)
120-123: Nit: avoid passing headers=None

Only include headers when provided.

```diff
-    request_kwargs: dict[str, Any] = {"headers": headers}
+    request_kwargs: dict[str, Any] = {}
+    if headers is not None:
+        request_kwargs["headers"] = headers
     if json_data is not None:
         request_kwargs["json"] = json_data
```
200-206: Prefer response.raise_for_status() and avoid long messages (TRY003)

Simplify and align with aiohttp idioms.

```diff
-            raise aiohttp.ClientResponseError(
-                request_info=response.request_info,
-                history=(),
-                status=response.status,
-                message=f"Status code: {response.status}",
-                headers=response.headers,
-            )
+            response.raise_for_status()
```
212-212: Avoid long exception messages (TRY003) and preserve JSON-RPC error details

Consider raising a structured error instead of embedding long text in ValueError.

```diff
-        raise ValueError(f"JSON-RPC error: {json_response['error']}")
+        err = json_response["error"]
+        raise ValueError({"jsonrpc_error": err})
```

Alternatively, introduce a dedicated JSONRPCError(Exception) type with code/message fields.
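A minimal sketch of such a dedicated exception type; the class and method names are hypothetical, and the field names follow the JSON-RPC 2.0 error object (code, message, optional data):

```python
class JSONRPCError(Exception):
    """Structured JSON-RPC error carrying code/message/data fields."""

    def __init__(self, code: int, message: str, data: object = None) -> None:
        super().__init__(f"JSON-RPC error {code}: {message}")
        self.code = code
        self.message = message
        self.data = data

    @classmethod
    def from_response(cls, json_response: dict) -> "JSONRPCError":
        # -32603 (Internal error) is the JSON-RPC 2.0 fallback code.
        err = json_response.get("error", {})
        return cls(
            err.get("code", -32603),
            err.get("message", "unknown error"),
            err.get("data"),
        )


err = JSONRPCError.from_response(
    {"error": {"code": -32601, "message": "Method not found"}}
)
print(err.code, err.message)  # → -32601 Method not found
```

Callers can then catch `JSONRPCError` specifically and log `err.code`/`err.data` without parsing a long message string.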
12-12: Unused constant

`DEFAULT_WEBSOCKET_TIMEOUT` is defined but unused in this module. Remove it unless it's part of the public API imported elsewhere.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
common/http_timing.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.13.1)
common/http_timing.py
150: Avoid specifying long messages outside the exception class (TRY003)
212: Avoid specifying long messages outside the exception class (TRY003)
🔇 Additional comments (3)
common/http_timing.py (3)
128-135: Handle Retry-After (seconds and HTTP-date) and fractional seconds

The current int() cast can raise/truncate and ignores the HTTP-date format.

```diff
-            if response.status == 429 and retry_count < MAX_RETRIES - 1:
-                wait_time = int(
-                    response.headers.get("Retry-After", DEFAULT_RATE_LIMIT_WAIT)
-                )
-                await response.release()
-                # Exponential backoff
-                await asyncio.sleep(wait_time * (2 ** retry_count))
-                continue
+            if response.status == 429 and retry_count < MAX_RETRIES - 1:
+                retry_after = response.headers.get("Retry-After")
+                wait_time: float = float(DEFAULT_RATE_LIMIT_WAIT)
+                if retry_after:
+                    ra = retry_after.strip()
+                    if ra.isdigit():
+                        wait_time = float(ra)
+                    else:
+                        try:
+                            from datetime import datetime, timezone
+                            from email.utils import parsedate_to_datetime
+                            dt = parsedate_to_datetime(ra)
+                            if dt is not None:
+                                now = datetime.now(timezone.utc)
+                                wait_time = max(0.0, (dt - now).total_seconds())
+                        except Exception:
+                            pass
+                await response.release()
+                # Exponential backoff
+                await asyncio.sleep(wait_time * (2 ** retry_count))
+                continue
```
27-45: Scope TraceConfig to this request and pass trace_request_ctx to avoid timing pollution

Guard the callbacks and pass `trace_request_ctx=timing_collector` with the request.

```diff
     async def on_request_start(
         _session: Any, _context: Any, _params: Any
     ) -> None:
-        self.timing["start"] = time.monotonic()
+        if _context is not self:
+            return
+        self.timing["start"] = time.monotonic()
@@
     async def on_connection_create_start(
         _session: Any, _context: Any, _params: Any
     ) -> None:
-        self.timing["conn_start"] = time.monotonic()
+        if _context is not self:
+            return
+        self.timing["conn_start"] = time.monotonic()
@@
     async def on_connection_create_end(
         _session: Any, _context: Any, _params: Any
     ) -> None:
-        self.timing["conn_end"] = time.monotonic()
+        if _context is not self:
+            return
+        self.timing["conn_end"] = time.monotonic()
@@
     async def on_request_end(
         _session: Any, _context: Any, _params: Any
     ) -> None:
-        self.timing["end"] = time.monotonic()
+        if _context is not self:
+            return
+        self.timing["end"] = time.monotonic()
@@
-    response = await isolated_session.request(method.upper(), url, **request_kwargs)
+    response = await isolated_session.request(
+        method.upper(), url, trace_request_ctx=timing_collector, **request_kwargs
+    )
```

Also applies to: 125-125
147-151: Use identity checks for aiohttp.ClientResponse and avoid falsy tests

Avoid misclassifying non-2xx responses and ensure proper cleanup checks.

```diff
-        if not response:
+        if response is None:
             if last_exception:
                 raise last_exception
             raise ValueError("No response received after retries")
@@
-        if response and not response.closed:
+        if response is not None and not response.closed:
             await response.release()
```

Also applies to: 216-217
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
common/metric_types.py (1)
198-260: Fix latency undercount on retries: subtract connection time per-attempt (not globally).
conn_timeis captured once for the session and subtracted from the final attempt’sresponse_time. If the first attempt opens the connection (429) and the second uses keep-alive, you’ll subtract the first attempt’s connection time from the second attempt’s response time, underreporting latency (can even go negative).Compute connection time per attempt via
trace_request_ctx, carry the final attempt’s ctx out of the loop, and clamp at zero. Also, validate endpoint early to avoid passing None tosession.post.Apply this diff:
```diff
@@
-    async def fetch_data(self) -> float:
-        """Measure single request latency with detailed timing."""
-        endpoint: str | None = self.config.endpoints.get_endpoint()
+    async def fetch_data(self) -> float:
+        """Measure single request latency with detailed timing."""
+        endpoint = self.config.endpoints.get_endpoint()
+        if not endpoint:
+            raise ValueError("No endpoint configured for HttpCallLatencyMetricBase")
@@
-        # Add trace config for detailed timing
-        trace_config = aiohttp.TraceConfig()
-        timing = {}
+        # Add trace config for detailed timing (per-attempt via trace_request_ctx)
+        trace_config = aiohttp.TraceConfig()
@@
-        async def on_connection_create_start(session, context, params):
-            timing["conn_start"] = time.monotonic()
+        async def on_connection_create_start(session, trace_config_ctx, params):
+            ctx = getattr(trace_config_ctx, "trace_request_ctx", None)
+            if isinstance(ctx, dict):
+                ctx["conn_start"] = time.monotonic()
@@
-        async def on_connection_create_end(session, context, params):
-            timing["conn_end"] = time.monotonic()
+        async def on_connection_create_end(session, trace_config_ctx, params):
+            ctx = getattr(trace_config_ctx, "trace_request_ctx", None)
+            if isinstance(ctx, dict):
+                ctx["conn_end"] = time.monotonic()
@@
-            response_time = 0.0
-            response = None
+            response_time = 0.0
+            response = None
+            last_ctx: dict[str, Any] | None = None
@@
-            for retry_count in range(MAX_RETRIES):
-                start_time: float = time.monotonic()
-                response: aiohttp.ClientResponse = await self._send_request(
-                    session, endpoint
-                )
-                response_time: float = time.monotonic() - start_time
+            for retry_count in range(MAX_RETRIES):
+                attempt_ctx: dict[str, Any] = {}
+                start_time: float = time.monotonic()
+                response = await self._send_request(
+                    session, endpoint, trace_request_ctx=attempt_ctx
+                )
+                response_time = time.monotonic() - start_time
+                last_ctx = attempt_ctx
@@
-            if "conn_start" in timing and "conn_end" in timing:
-                conn_time = timing["conn_end"] - timing["conn_start"]
-            else:
-                conn_time = 0
+            conn_time = 0.0
+            if last_ctx and "conn_start" in last_ctx and "conn_end" in last_ctx:
+                conn_time = max(last_ctx["conn_end"] - last_ctx["conn_start"], 0.0)
@@
-            # Return RPC time only (exclude connection time)
-            return response_time - conn_time
+            # Return RPC time only (exclude connection time)
+            return max(response_time - conn_time, 0.0)
```
trace_request_ctx, update_send_requestto forward kwargs:# Outside the changed hunk: replace _send_request with a kwargs‑forwarding variant async def _send_request( self, session: aiohttp.ClientSession, endpoint: str, **kwargs: Any, ) -> aiohttp.ClientResponse: return await session.post( endpoint, headers={ "Accept": "application/json", "Content-Type": "application/json", }, json=self._base_request, **kwargs, )
🧹 Nitpick comments (5)
common/metric_types.py (2)
82-86: Trim timeout message to satisfy TRY003 and keep logs concise. Shorten the message or omit it; the error type plus outer context is enough.
```diff
-            self.handle_error(
-                TimeoutError(
-                    f"WebSocket metric collection exceeded {self.config.timeout}s timeout"
-                )
-            )
+            self.handle_error(TimeoutError("WebSocket metric collection timed out"))
```
126-130: Same for HTTP timeout: concise message. Align with the WebSocket case.
```diff
-            self.handle_error(
-                TimeoutError(
-                    f"Metric collection exceeded {self.config.timeout}s timeout"
-                )
-            )
+            self.handle_error(TimeoutError("HTTP metric collection timed out"))
```
common/hyperliquid_info_base.py (3)
56-58: Silence Ruff ARG004 (unused parameter). Rename the unused argument to underscore.
```diff
-    def validate_state(state_data: dict[str, Any]) -> bool:
+    def validate_state(_: dict[str, Any]) -> bool:
```
61-63: Silence Ruff ARG004 (unused parameter). Same here; rename to underscore.
```diff
-    def get_params_from_state(state_data: dict[str, Any]) -> dict[str, str]:
+    def get_params_from_state(_: dict[str, Any]) -> dict[str, str]:
```
77-89: Use a throwaway name for the unused parameter (Ruff ARG002). `endpoint` is unused; rename it to an underscore-prefixed name to satisfy the linter.
```diff
-    async def _send_request(
-        self, session: aiohttp.ClientSession, endpoint: str
-    ) -> aiohttp.ClientResponse:
+    async def _send_request(
+        self, session: aiohttp.ClientSession, _endpoint: str
+    ) -> aiohttp.ClientResponse:
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- common/hyperliquid_info_base.py (1 hunks)
- common/metric_types.py (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
common/metric_types.py (2)
- metrics/solana_landing_rate.py (2): process_data (187-188), fetch_data (150-185)
- common/base_metric.py (5): process_data (53-54), update_metric_value (82-91), mark_success (93-95), mark_failure (97-102), handle_error (104-131)
common/hyperliquid_info_base.py (4)
- common/metric_config.py (2): MetricConfig (37-50), MetricLabels (65-111)
- common/metric_types.py (7): HttpCallLatencyMetricBase (136-278), method (145-147), get_params_from_state (194-196), _build_base_request (177-186), validate_state (189-191), get_endpoint (109-111), _send_request (263-274)
- metrics/hyperliquid_info.py (4): method (12-14), method (26-28), get_params_from_state (17-19), get_params_from_state (31-33)
- api/read/hyperliquid.py (1): handler (41-42)
🪛 Ruff (0.13.1)
common/metric_types.py
70-70: Avoid specifying long messages outside the exception class
(TRY003)
common/hyperliquid_info_base.py
56-56: Unused static method argument: state_data
(ARG004)
61-61: Unused static method argument: state_data
(ARG004)
78-78: Unused method argument: endpoint
(ARG002)
🔇 Additional comments (3)
common/hyperliquid_info_base.py (3)
51-54: Request payload shape looks correct. `"type"` + `"user"` matches Hyperliquid /info expectations.
36-39: Guard against missing 'user' to avoid KeyError. Accessing `params["user"]` will crash if subclasses forget to provide it. Raise a clear error instead.
```diff
-        params: dict[str, str] = self.get_params_from_state(state_data)
-        self.user_address: str = params["user"]
+        params: dict[str, str] = self.get_params_from_state(state_data)
+        try:
+            self.user_address = params["user"]
+        except KeyError as exc:
+            raise ValueError("Missing required 'user' in state parameters") from exc
```
65-76: Harden endpoint derivation; avoid "None/info". If no endpoint is configured, the current code can produce "None/info". Also prefer using the raw endpoints config to detect None.
```diff
-    def get_info_endpoint(self) -> str:
+    def get_info_endpoint(self) -> str:
         """Transform EVM endpoint to info endpoint."""
-        base_endpoint = self.get_endpoint().rstrip("/")
+        raw = self.config.endpoints.get_endpoint()
+        if raw is None:
+            raise ValueError("No endpoint configured")
+        base_endpoint = str(raw).rstrip("/")
         if base_endpoint.endswith("/info"):
             return base_endpoint
         if base_endpoint.endswith("/evm"):
             base_endpoint = base_endpoint[:-4]
         return f"{base_endpoint}/info"
```
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
common/metric_types.py (2)
200-205: Guard against a missing endpoint before issuing the request. `endpoint` may be None; passing it to `_send_request` will fail. Raise a clear error early.
Apply this diff:
```diff
-        endpoint: str | None = self.config.endpoints.get_endpoint()
+        endpoint = self.config.endpoints.get_endpoint()
+        if endpoint is None:
+            raise ValueError("No endpoint configured for HTTP metric")
```
221-260: Fix connection-time subtraction across retries and prevent negative latencies. `conn_time` is measured once (first connection) but subtracted from the final attempt, which skews results after 429 retries and can produce negative values. Subtract it only for the first attempt and clamp at zero.
Apply this diff:
```diff
-            for retry_count in range(MAX_RETRIES):
+            last_attempt = 0
+            for retry_count in range(MAX_RETRIES):
                 start_time: float = time.monotonic()
                 response: aiohttp.ClientResponse = await self._send_request(
                     session, endpoint
                 )
                 response_time: float = time.monotonic() - start_time

                 if response.status == 429 and retry_count < MAX_RETRIES - 1:
                     wait_time = int(response.headers.get("Retry-After", 3))
                     await response.release()
                     await asyncio.sleep(wait_time)
                     continue
-
-                break
+                last_attempt = retry_count
+                break
@@
-            if "conn_start" in timing and "conn_end" in timing:
-                conn_time = timing["conn_end"] - timing["conn_start"]
-            else:
-                conn_time = 0
+            if "conn_start" in timing and "conn_end" in timing:
+                conn_time = timing["conn_end"] - timing["conn_start"]
+            else:
+                conn_time = 0.0
@@
-            # Return RPC time only (exclude connection time)
-            return response_time - conn_time
+            # Return RPC time only (exclude connection time from first attempt)
+            effective_conn = conn_time if last_attempt == 0 else 0.0
+            return max(response_time - effective_conn, 0.0)
```
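The arithmetic the diff above encodes can be isolated and tested on its own. A sketch (the function name is mine, not from the codebase):

```python
def effective_latency(response_time: float, conn_time: float, last_attempt: int) -> float:
    """RPC latency with connection setup excluded only where it was actually paid.

    The TCP/TLS handshake happens once; on 429 retries the pooled connection is
    reused, so conn_time is subtracted only when the measured attempt is the
    first one. Clamping at zero keeps timer jitter from producing negatives.
    """
    effective_conn = conn_time if last_attempt == 0 else 0.0
    return max(response_time - effective_conn, 0.0)


# Values chosen as exact binary fractions so the comparisons are exact
assert effective_latency(0.25, 0.125, last_attempt=0) == 0.125  # handshake excluded
assert effective_latency(0.25, 0.125, last_attempt=2) == 0.25   # reused connection
assert effective_latency(0.0625, 0.125, last_attempt=0) == 0.0  # clamped, never negative
```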
🧹 Nitpick comments (3)
common/metric_types.py (2)
206-214: Silence unused callback arguments to satisfy linters. Prefix the callback args with underscores to address ARG001 without changing behavior.
Apply this diff:
```diff
-        async def on_connection_create_start(session, context, params) -> None:
+        async def on_connection_create_start(_session, _context, _params) -> None:
             timing["conn_start"] = time.monotonic()

-        async def on_connection_create_end(session, context, params) -> None:
+        async def on_connection_create_end(_session, _context, _params) -> None:
             timing["conn_end"] = time.monotonic()
```
215-218: Optional: set a per-request timeout in aiohttp. To ensure graceful cancellation and avoid hanging sockets, pass `aiohttp.ClientTimeout(total=self.config.timeout)` to the ClientSession or the request.
Example:
```diff
-        async with aiohttp.ClientSession(
-            trace_configs=[trace_config],
-        ) as session:
+        timeout = aiohttp.ClientTimeout(total=self.config.timeout)
+        async with aiohttp.ClientSession(trace_configs=[trace_config], timeout=timeout) as session:
             response_time = 0.0
```
common/hyperliquid_info_base.py (1)
78-80: Rename the unused parameter to underscore to satisfy linters. `endpoint` isn't used; rename it to `_endpoint` to avoid ARG002.
Apply this diff:
```diff
-    async def _send_request(
-        self, session: aiohttp.ClientSession, endpoint: str
-    ) -> aiohttp.ClientResponse:
+    async def _send_request(
+        self, session: aiohttp.ClientSession, _endpoint: str
+    ) -> aiohttp.ClientResponse:
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- api/support/update_state.py (1 hunks)
- common/hyperliquid_info_base.py (1 hunks)
- common/metric_types.py (6 hunks)
- common/state/blob_storage.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- api/support/update_state.py
🧰 Additional context used
🧬 Code graph analysis (2)
common/metric_types.py (3)
- metrics/solana_landing_rate.py (2): process_data (187-188), fetch_data (150-185)
- metrics/ethereum.py (1): process_data (293-310)
- common/base_metric.py (5): process_data (53-54), update_metric_value (82-91), mark_success (93-95), mark_failure (97-102), handle_error (104-131)
common/hyperliquid_info_base.py (3)
- common/metric_config.py (2): MetricConfig (37-50), MetricLabels (65-111)
- common/metric_types.py (7): HttpCallLatencyMetricBase (136-278), method (145-147), get_params_from_state (194-196), _build_base_request (177-186), validate_state (189-191), get_endpoint (109-111), _send_request (263-274)
- metrics/hyperliquid_info.py (4): method (12-14), method (26-28), get_params_from_state (17-19), get_params_from_state (31-33)
🪛 Ruff (0.13.1)
common/metric_types.py
70-70: Avoid specifying long messages outside the exception class
(TRY003)
206-206: Unused function argument: session
(ARG001)
206-206: Unused function argument: context
(ARG001)
206-206: Unused function argument: params
(ARG001)
209-209: Unused function argument: session
(ARG001)
209-209: Unused function argument: context
(ARG001)
209-209: Unused function argument: params
(ARG001)
common/hyperliquid_info_base.py
56-56: Unused static method argument: state_data
(ARG004)
61-61: Unused static method argument: state_data
(ARG004)
78-78: Unused method argument: endpoint
(ARG002)
🔇 Additional comments (5)
common/state/blob_storage.py (1)
25-25: Explicit constructor return type: LGTM. Clearer typing, no behavior change.
common/metric_types.py (2)
72-76: Timeout wrapping via asyncio.wait_for: LGTM. Using `wait_for` with `config.timeout` is correct.
126-130: Consistent timeout error propagation: LGTM. Converts the asyncio timeout into a clear error for `handle_error`.
common/hyperliquid_info_base.py (2)
37-39: Avoid KeyError when 'user' is missing. Accessing `params["user"]` will raise an opaque KeyError; raise a clear ValueError instead.
Apply this diff:
```diff
-        params: dict[str, str] = self.get_params_from_state(state_data)
-        self.user_address: str = params["user"]
+        params: dict[str, str] = self.get_params_from_state(state_data)
+        try:
+            self.user_address: str = params["user"]
+        except KeyError:
+            raise ValueError("Missing required 'user' in state parameters") from None
```
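The guard can be exercised standalone. A small sketch (the helper name is illustrative, not from the codebase):

```python
def extract_user(params: dict[str, str]) -> str:
    """Pull the required 'user' key, converting KeyError into a clear ValueError."""
    try:
        return params["user"]
    except KeyError:
        raise ValueError("Missing required 'user' in state parameters") from None


assert extract_user({"user": "0xabc"}) == "0xabc"
try:
    extract_user({})
except ValueError as err:
    assert "user" in str(err)
else:
    raise AssertionError("missing key must raise ValueError")
```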
65-75: Harden endpoint derivation and avoid "None/info" or a doubled "/info". Handle None endpoints and idempotently append /info.
Apply this diff:
```diff
-    def get_info_endpoint(self) -> str:
-        """Transform EVM endpoint to info endpoint."""
-        base_endpoint: str = self.get_endpoint().rstrip("/")
-
-        if base_endpoint.endswith("/info"):
-            return base_endpoint
-
-        if base_endpoint.endswith("/evm"):
-            base_endpoint = base_endpoint[:-4]
-
-        return f"{base_endpoint}/info"
+    def get_info_endpoint(self) -> str:
+        """Transform EVM endpoint to info endpoint."""
+        raw = self.config.endpoints.get_endpoint()
+        if raw is None:
+            raise ValueError("No endpoint configured")
+        base = str(raw).rstrip("/")
+        if base.endswith("/info"):
+            return base
+        if base.endswith("/evm"):
+            base = base[:-4]
+        return f"{base}/info"
```
Summary by CodeRabbit
New Features
Refactor
Style