fix LiteLLM model issue #68
Claude finished @tysonthomas9's task (View job)

Code Review for PR #68: fix LiteLLM model issue

✅ Changes Summary: This PR makes two key changes to

Code Quality Assessment
✅ Positive Aspects:

Security Review
✅ No Security Concerns:

Performance Considerations
✅ No Performance Issues:

Potential Bugs
🔍 Areas to Watch:

Test Coverage
Suggested Test Cases:

```ts
// Test LiteLLM without API key
test('should not require API key for litellm provider', () => {
  // Test implementation
});

// Test model fallback chain
test('should fallback from mini to main model when mini is undefined', () => {
  // Test implementation
});

test('should fallback through nano -> mini -> main model chain', () => {
  // Test implementation
});
```

Overall Assessment

✅ LGTM with Minor Suggestions

This is a solid fix that addresses the LiteLLM model issue. The changes are:
The code quality is high and follows existing patterns. The main suggestions are minor improvements around type safety and adding test coverage for the new logic paths. Recommendation: Approve with the minor grammar fix and consider adding the suggested test cases in a follow-up.
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

API key requirement logic broadened to exempt LiteLLM and BrowserOperator. Mini/Nano model resolution updated with explicit fallback chains and propagation into call context; error messages and runner config now reflect those fallbacks.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as ConfigurableAgentTool
    participant Resolver as Model Resolver
    participant Conf as Agent Config
    UI->>Resolver: Resolve model for USE_MINI
    Resolver->>Conf: check `miniModel`
    alt miniModel present
        Conf-->>Resolver: return `miniModel`
    else miniModel missing
        Resolver->>Conf: check `mainModel`
        alt mainModel present
            Conf-->>Resolver: return `mainModel`
        else
            Resolver->>Conf: check `model`
            alt model present
                Conf-->>Resolver: return `model`
            else
                Resolver-->>UI: throw "No viable Mini model" error
            end
        end
    end
    Resolver-->>UI: resolved mini/named model -> update callCtx.miniModel
    UI->>Resolver: Resolve model for USE_NANO
    Resolver->>Conf: check `nanoModel`
    alt nanoModel present
        Conf-->>Resolver: return `nanoModel`
    else
        Resolver->>Conf: check `miniModel` -> `mainModel` -> `model` in order
        alt fallback found
            Conf-->>Resolver: return fallback model
        else
            Resolver-->>UI: throw "No viable Nano model" error
        end
    end
    Resolver-->>UI: resolved nano -> update callCtx.nanoModel
    UI->>UI: include resolved models in runnerConfig
```
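The fallback order shown in the diagram can be sketched as a standalone helper. The field names mirror the PR's call context, but this is an illustrative sketch, not the actual implementation in ConfigurableAgentTool.ts:

```typescript
// Illustrative sketch of the USE_MINI / USE_NANO fallback chains.
interface ModelContext {
  nanoModel?: string;
  miniModel?: string;
  mainModel?: string;
  model?: string;
}

// USE_MINI resolution: miniModel -> mainModel -> model.
function resolveMiniModel(ctx: ModelContext): string {
  const resolved = ctx.miniModel || ctx.mainModel || ctx.model || '';
  if (!resolved) {
    throw new Error('No viable Mini model: context has no miniModel, mainModel, or model.');
  }
  return resolved;
}

// USE_NANO resolution: nanoModel -> miniModel -> mainModel -> model.
function resolveNanoModel(ctx: ModelContext): string {
  const resolved = ctx.nanoModel || ctx.miniModel || ctx.mainModel || ctx.model || '';
  if (!resolved) {
    throw new Error('No viable Nano model: context has no nanoModel, miniModel, mainModel, or model.');
  }
  return resolved;
}
```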
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
Suggested reviewers
Poem

Pre-merge checks and finishing touches
❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (2 hunks)

🧰 Additional context used
🧬 Code graph analysis (1)

front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (2)
- front_end/panels/ai_chat/ui/AIChatPanel.ts (1): provider (1488-1554)
- front_end/panels/ai_chat/core/Constants.ts (1): MODEL_SENTINELS (85-88)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: claude-review
🔇 Additional comments (2)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (2)
502-507: LGTM! Improved fallback logic for mini model.

The mini model resolution now gracefully falls back through miniModel → mainModel → model, with a clear error message indicating all available fallback sources. This improves robustness when specific model tiers aren't configured.

508-513: LGTM! Comprehensive fallback chain for nano model.

The nano model resolution implements a complete fallback hierarchy (nanoModel → miniModel → mainModel → model), ensuring the agent can gracefully degrade through all available model tiers. The error message appropriately references all fallback sources.
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```diff
 if (this.config.modelName === MODEL_SENTINELS.USE_MINI) {
-  if (!callCtx.miniModel) {
-    throw new Error(`Mini model not provided in context for agent '${this.name}'. Ensure context includes miniModel.`);
+  // Fall back to main model if mini model is not configured
+  modelName = callCtx.miniModel || callCtx.mainModel || callCtx.model || '';
+  if (!modelName) {
```
Propagate fallback model into call context
The new fallback for MODEL_SENTINELS.USE_MINI/USE_NANO now chooses callCtx.mainModel/callCtx.model when a mini or nano model is absent, but the fallback is only assigned to the local modelName. The subsequent runnerConfig still forwards the original callCtx.miniModel/callCtx.nanoModel values (which are undefined), so tools that rely on ctx.miniModel or ctx.nanoModel (e.g. FullPageAccessibilityTreeToMarkdownTool checks ctx.nanoModel and returns “Missing LLM context”) will still fail even though a fallback model is available. Before this change the agent aborted early with a clear error; now it proceeds until a tool accesses the missing field. Update the context to include the resolved fallback model so mini/nano-dependent tools can run.
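A minimal sketch of the suggested remedy, assuming field names from the review comment (illustrative, not the actual patch): after resolving the fallback, write it back into the call context before building the runner config, so tools reading ctx.miniModel observe a value instead of undefined.

```typescript
interface CallContext {
  miniModel?: string;
  mainModel?: string;
  model?: string;
}

// Resolve USE_MINI and propagate the fallback into the context so that
// downstream consumers of ctx.miniModel (e.g. mini/nano-dependent tools)
// see the resolved value rather than undefined.
function resolveAndPropagateMini(callCtx: CallContext): string {
  const modelName = callCtx.miniModel || callCtx.mainModel || callCtx.model || '';
  if (!modelName) {
    throw new Error('No viable Mini model in context.');
  }
  callCtx.miniModel = modelName; // runnerConfig now forwards the resolved model
  return modelName;
}
```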
Claude finished @tysonthomas9's task (View job)

Code Review for PR #68: fix LiteLLM model issue

Review Checklist

🔍 Summary

This PR makes significant improvements to LiteLLM provider support and model fallback logic across 7 files. The changes are functionally sound and address real usability issues, but there are some important considerations for robustness and consistency.

✅ Positive Aspects

1. Comprehensive API Key Logic Update
2. Robust Model Fallback Implementation
3. Context Propagation Fix (
Actionable comments posted: 4
🧹 Nitpick comments (5)
front_end/panels/ai_chat/tools/CombinedExtractionTool.ts (1)

90-92: Normalize provider string; consider centralizing requiresApiKey logic

Lowercasing avoids accidental mismatches; extracting a helper avoids copy/paste across tools.

```diff
-  // LiteLLM and BrowserOperator have optional API keys
-  const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
+  // LiteLLM and BrowserOperator have optional API keys
+  const p = String(provider || '').toLowerCase();
+  const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
```

Optionally add a shared util: providerRequiresApiKey(p?: string): boolean, and use it across all tools.
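A sketch of that shared util (the name providerRequiresApiKey comes from the comment above; it does not exist in the codebase yet):

```typescript
// Providers whose API keys are optional; every other provider requires one.
const API_KEY_OPTIONAL_PROVIDERS = new Set(['litellm', 'browseroperator']);

// Case-insensitive check; a missing provider conservatively requires a key.
function providerRequiresApiKey(provider?: string): boolean {
  return !API_KEY_OPTIONAL_PROVIDERS.has(String(provider || '').toLowerCase());
}
```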
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (1)

116-118: Normalize provider; align with shared helper when introduced

Keeps behavior consistent with other tools.

```diff
-  // LiteLLM and BrowserOperator have optional API keys
-  const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
+  // LiteLLM and BrowserOperator have optional API keys
+  const p = String(provider || '').toLowerCase();
+  const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
```

front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (1)
70-72: Normalize provider; optional: factor to shared util

Prevents case-sensitivity issues and reduces duplication once centralized.

```diff
-  // LiteLLM and BrowserOperator have optional API keys
-  const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
+  // LiteLLM and BrowserOperator have optional API keys
+  const p = String(provider || '').toLowerCase();
+  const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
```

front_end/panels/ai_chat/tools/FullPageAccessibilityTreeToMarkdownTool.ts (1)
58-60: Normalize provider; good that LLM context is validated before key gating

This file already validates ctx.provider/ctx.nanoModel first (nice). Normalize provider string for safety.

```diff
-  // LiteLLM and BrowserOperator have optional API keys
-  const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
+  // LiteLLM and BrowserOperator have optional API keys
+  const p = String(provider || '').toLowerCase();
+  const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
```

front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1)
111-113: Normalize provider and consider deferring key gating until first LLM call

Today this may emit "API key not configured" even when provider is missing or a path doesn't use LLM. Normalize provider, and optionally move the check closer to the first LLM call.

```diff
-  // LiteLLM and BrowserOperator have optional API keys
-  const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
+  // LiteLLM and BrowserOperator have optional API keys
+  const p = String(provider || '').toLowerCase();
+  const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
```
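One way to defer that gating, sketched with hypothetical types (the real tools are structured differently): wrap the LLM call so the key check runs only when a call is actually made, and non-LLM code paths never hit it.

```typescript
type LlmCall = (model: string, apiKey: string | null) => string;

// Wrap an LLM call so the API-key check happens at call time, not at
// tool entry; LiteLLM and BrowserOperator remain exempt.
function makeGatedLlmCall(provider: string, apiKey: string | null, call: LlmCall) {
  const p = provider.toLowerCase();
  const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
  return (model: string): string => {
    if (requiresApiKey && !apiKey) {
      throw new Error('API key not configured.');
    }
    return call(model, apiKey);
  };
}
```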
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (3 hunks)
- front_end/panels/ai_chat/tools/CombinedExtractionTool.ts (1 hunks)
- front_end/panels/ai_chat/tools/FullPageAccessibilityTreeToMarkdownTool.ts (1 hunks)
- front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (1 hunks)
- front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1 hunks)
- front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (1 hunks)
- front_end/panels/ai_chat/tools/Tools.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
front_end/panels/ai_chat/tools/CombinedExtractionTool.ts (1)
- front_end/panels/ai_chat/ui/AIChatPanel.ts (1): provider (1488-1554)

front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (2)
- front_end/panels/ai_chat/ui/AIChatPanel.ts (1): provider (1488-1554)
- front_end/panels/ai_chat/core/Constants.ts (1): MODEL_SENTINELS (85-88)

front_end/panels/ai_chat/tools/FullPageAccessibilityTreeToMarkdownTool.ts (1)
- front_end/panels/ai_chat/ui/AIChatPanel.ts (1): provider (1488-1554)

front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (1)
- front_end/panels/ai_chat/ui/AIChatPanel.ts (1): provider (1488-1554)

front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1)
- front_end/panels/ai_chat/ui/AIChatPanel.ts (1): provider (1488-1554)

front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (1)
- front_end/panels/ai_chat/ui/AIChatPanel.ts (1): provider (1488-1554)
🔇 Additional comments (1)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (1)

532-540: Good fix: propagate resolved mini/nano into call context

Verified that downstream tools across multiple files (HTMLToMarkdownTool, FullPageAccessibilityTreeToMarkdownTool, SchemaBasedExtractorTool, StreamlinedSchemaExtractorTool, CritiqueTool, Tools) all check for ctx.nanoModel or ctx.miniModel before using them. This fix correctly unblocks those tools by ensuring the resolved fallback models are available in the call context.
```ts
// Check if API key is required based on provider
// LiteLLM and BrowserOperator have optional API keys
// Other providers (OpenAI, Groq, OpenRouter) require API keys
const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
```
Reorder provider validation before API‑key gating to avoid misleading errors
If provider is undefined, the current order can return “API key not configured” instead of the clearer “Provider not provided…”. Validate provider first, then compute requiresApiKey.
```diff
-  // Check if API key is required based on provider
-  // LiteLLM and BrowserOperator have optional API keys
-  // Other providers (OpenAI, Groq, OpenRouter) require API keys
-  const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
-
-  if (requiresApiKey && !apiKey) {
+  // Validate required context first
+  if (!callCtx.provider) {
+    throw new Error(`Provider not provided in context for agent '${this.name}'. Ensure context includes provider.`);
+  }
+
+  // Check if API key is required based on provider (normalize for safety)
+  const p = String(callCtx.provider).toLowerCase();
+  const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
+
+  if (requiresApiKey && !apiKey) {
     const errorResult = this.createErrorResult(`API key not configured for ${this.name}`, [], 'error');
     // Create minimal error session
     const errorSession: AgentSession = {
       agentName: this.name,
```

Also applies to: 541-545
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts around
lines 463-466 (and also apply same change at 541-545): currently the code
computes requiresApiKey using the provider variable before validating that
provider is present, which can produce an “API key not configured” error when
provider is undefined; change the flow to validate that provider is provided
first (throw or return the clearer “Provider not provided” error) and only after
that compute requiresApiKey (e.g., determine provider-specific optional/required
status) so you never evaluate provider comparisons when provider is undefined.
```ts
// Fall back to main model if mini model is not configured
modelName = callCtx.miniModel || callCtx.mainModel || callCtx.model || '';
if (!modelName) {
  throw new Error(`Mini model not provided in context for agent '${this.name}'. Ensure context includes miniModel or mainModel.`);
}
```
Include ‘model’ in USE_MINI failure message to match actual fallback chain
Code falls back to callCtx.model, but the error mentions only miniModel/mainModel. Update message for accuracy.
```diff
- throw new Error(`Mini model not provided in context for agent '${this.name}'. Ensure context includes miniModel or mainModel.`);
+ throw new Error(`Mini model not provided in context for agent '${this.name}'. Ensure context includes miniModel, mainModel, or model.`);
```

🤖 Prompt for AI Agents
In front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts around
lines 503 to 507, the thrown error message only references miniModel and
mainModel while the fallback chain actually includes callCtx.model; update the
error string to include 'model' (or phrase like "miniModel, mainModel, or
model") so the message accurately reflects the fallback order, keeping the rest
of the logic unchanged.
```ts
// Fall back through nano -> mini -> main model chain
modelName = callCtx.nanoModel || callCtx.miniModel || callCtx.mainModel || callCtx.model || '';
if (!modelName) {
  throw new Error(`Nano model not provided in context for agent '${this.name}'. Ensure context includes nanoModel, miniModel, or mainModel.`);
}
```
Include ‘model’ in USE_NANO failure message to match actual fallback chain
The code falls back nano → mini → main → model; error omits model.
```diff
- throw new Error(`Nano model not provided in context for agent '${this.name}'. Ensure context includes nanoModel, miniModel, or mainModel.`);
+ throw new Error(`Nano model not provided in context for agent '${this.name}'. Ensure context includes nanoModel, miniModel, mainModel, or model.`);
```

🤖 Prompt for AI Agents
In front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts around
lines 509 to 513, the fallback chain used to determine modelName is nano -> mini
-> main -> model but the thrown Error message only mentions nanoModel,
miniModel, and mainModel; update the error text to include 'model' so it
reflects the actual fallback chain (e.g., "Ensure context includes nanoModel,
miniModel, mainModel, or model.") and keep the rest of the logic unchanged.
```ts
// LiteLLM and BrowserOperator have optional API keys
const requiresApiKey = providerForAction !== 'litellm' && providerForAction !== 'browseroperator';

if (requiresApiKey && !apiKey) { return { error: 'API key not configured.' }; }
```
Normalize provider; also update 404 confirmation path to honor LiteLLM/BrowserOperator exemption
First, normalize here:
```diff
- // LiteLLM and BrowserOperator have optional API keys
- const requiresApiKey = providerForAction !== 'litellm' && providerForAction !== 'browseroperator';
+ // LiteLLM and BrowserOperator have optional API keys
+ const p = String(providerForAction || '').toLowerCase();
+ const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
```

Additionally, confirmWith404LLM currently bails out if no API key even for litellm/browseroperator. Apply the same exemption there to enable 404 checks without a key:
```ts
// Inside confirmWith404LLM(...)
const apiKey = agentService.getApiKey();
const p = String(ctx?.provider || '').toLowerCase();
const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
if (requiresApiKey && !apiKey) {
  logger.warn('No API key available for 404 confirmation');
  return false;
}
```
Summary by CodeRabbit
Bug Fixes
New Features