
Conversation

@tysonthomas9 (Collaborator) commented Nov 11, 2025

Summary by CodeRabbit

  • Bug Fixes

    • LiteLLM provider no longer requires an API key (like BrowserOperator), reducing spurious "API key not configured" errors.
    • Model resolution now falls back intelligently (mini → main/model; nano → mini → main/model) and updates context to reflect actual resolved models.
    • Error messages clarified for missing model fallbacks.
  • New Features

    • Schema extraction results can include optional metadata for downstream use.

claude bot commented Nov 11, 2025

Claude finished @tysonthomas9's task


Code Review for PR #68: fix LiteLLM model issue ✅

Changes Summary:

This PR makes two key changes to ConfigurableAgentTool.ts:

  1. API Key Requirement Logic (Lines 463-466): Updates the requiresApiKey logic to exempt both litellm and browseroperator providers from requiring API keys
  2. Model Fallback Chain (Lines 502-513): Improves model resolution for USE_MINI and USE_NANO sentinels by adding fallback chains
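The fallback chains described in change 2 can be sketched as follows. This is a minimal sketch, not the PR's actual code: the context field names (`miniModel`, `nanoModel`, `mainModel`, `model`) come from the review, but the function boundaries are illustrative.

```typescript
// Minimal shape of the call-context fields the fallback reads.
interface CallCtx {
  nanoModel?: string;
  miniModel?: string;
  mainModel?: string;
  model?: string;
}

// USE_MINI sentinel: miniModel -> mainModel -> model.
function resolveMini(ctx: CallCtx): string {
  const name = ctx.miniModel || ctx.mainModel || ctx.model || '';
  if (!name) {
    throw new Error('Mini model not provided in context. Ensure context includes miniModel, mainModel, or model.');
  }
  return name;
}

// USE_NANO sentinel: nanoModel -> miniModel -> mainModel -> model.
function resolveNano(ctx: CallCtx): string {
  const name = ctx.nanoModel || ctx.miniModel || ctx.mainModel || ctx.model || '';
  if (!name) {
    throw new Error('Nano model not provided in context. Ensure context includes nanoModel, miniModel, mainModel, or model.');
  }
  return name;
}
```

Note the `||` chains short-circuit on the first truthy value, so an empty-string model name also falls through to the next tier.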

Code Quality Assessment

✅ Positive Aspects:

  • Clear, descriptive comments explaining the logic
  • Consistent code style with the existing codebase
  • Proper error handling with informative error messages
  • Logical fallback chain implementation that improves robustness

⚠️ Areas for Consideration:

  1. Type Safety (ConfigurableAgentTool.ts:504, 510):

    modelName = callCtx.miniModel || callCtx.mainModel || callCtx.model || '';

    The fallback to an empty string could mask a missing configuration downstream. Consider a more explicit approach:

    modelName = callCtx.miniModel ?? callCtx.mainModel ?? callCtx.model ?? null;
    if (!modelName) {
      throw new Error('...');
    }
  2. Comment Clarity (Line 464):
    The comment says "LiteLLM and BrowserOperator has optional API keys" but should be "LiteLLM and BrowserOperator have optional API keys" (grammar fix).

Security Review

✅ No Security Concerns:

  • API key handling logic is appropriate
  • No exposure of sensitive data
  • Proper validation of provider types
  • Error messages don't leak sensitive information

Performance Considerations

✅ No Performance Issues:

  • Changes are minimal and don't introduce performance overhead
  • Fallback chains are efficient (short-circuit evaluation)
  • No unnecessary computations or memory allocations

Potential Bugs

🔍 Areas to Watch:

  1. Provider Type Validation: The code assumes callCtx.provider is one of the expected values ('litellm', 'browseroperator', etc.). Consider adding validation or using an enum/type guard.

  2. Error Message Consistency: The error messages for missing models now mention fallback options, which is good, but ensure all calling code is prepared for these new fallback behaviors.
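The enum/type-guard suggestion in item 1 could look like the sketch below. The `KNOWN_PROVIDERS` list and function names are hypothetical; only the provider identifiers mentioned in this review are assumed.

```typescript
// Hypothetical provider union; the real codebase may define more providers.
const KNOWN_PROVIDERS = ['openai', 'litellm', 'browseroperator', 'groq', 'openrouter'] as const;
type Provider = typeof KNOWN_PROVIDERS[number];

// User-defined type guard narrowing an arbitrary string to the union.
function isKnownProvider(value: string | undefined): value is Provider {
  return (KNOWN_PROVIDERS as readonly string[]).includes(value ?? '');
}

// Fail fast on unknown providers instead of silently requiring a key.
function requiresApiKey(provider: string | undefined): boolean {
  if (!isKnownProvider(provider)) {
    throw new Error(`Unknown provider: ${provider ?? '(none)'}`);
  }
  return provider !== 'litellm' && provider !== 'browseroperator';
}
```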

Test Coverage

⚠️ Test Coverage Considerations:

  • Missing Test Cases: The changes introduce new logical paths that should be tested:
    • LiteLLM provider without API key
    • Model fallback chains for USE_MINI and USE_NANO
    • Edge cases where all fallback models are undefined/null

Suggested Test Cases:

// Test LiteLLM without API key
test('should not require API key for litellm provider', () => {
  // Test implementation
});

// Test model fallback chain
test('should fallback from mini to main model when mini is undefined', () => {
  // Test implementation
});

test('should fallback through nano -> mini -> main model chain', () => {
  // Test implementation
});

Overall Assessment

✅ LGTM with Minor Suggestions

This is a solid fix that addresses the LiteLLM model issue. The changes are:

  • Functionally correct and solve the stated problem
  • Well-implemented with proper fallback logic
  • Backwards compatible with existing code
  • Appropriately scoped to the specific issue

The code quality is high and follows existing patterns. The main suggestions are minor improvements around type safety and adding test coverage for the new logic paths.

Recommendation: Approve with the minor grammar fix and consider adding the suggested test cases in a follow-up.


coderabbitai bot commented Nov 11, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

API key requirement logic broadened to exempt LiteLLM and BrowserOperator. Mini/Nano model resolution updated with explicit fallback chains and propagation into call context; error messages and runner config now reflect those fallbacks.

Changes

Cohort / File(s) Summary
Agent model resolution & key logic
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts
Treats litellm and browseroperator as not requiring API keys. Resolves USE_MINI by falling back miniModel → mainModel → model; resolves USE_NANO by falling back nanoModel → miniModel → mainModel → model. Propagates resolved mini/nano into callCtx and includes them in runnerConfig. Error strings updated to reference fallback sources.
Tool API-key checks (multiple tools)
front_end/panels/ai_chat/tools/CombinedExtractionTool.ts, front_end/panels/ai_chat/tools/FullPageAccessibilityTreeToMarkdownTool.ts, front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts, front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts, front_end/panels/ai_chat/tools/Tools.ts
Updated provider API-key requirement logic: provider !== 'litellm' && provider !== 'browseroperator' now determines if an API key is required (i.e., litellm and browseroperator do not require a key). Control flow for early API-key error path adjusted accordingly.
Schema extraction result shape
front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts
Added optional metadata?: ExtractionMetadata to exported SchemaExtractionResult interface. Also updated API-key requirement condition to exempt litellm alongside browseroperator.

Sequence Diagram(s)

sequenceDiagram
    participant UI as ConfigurableAgentTool
    participant Resolver as Model Resolver
    participant Conf as Agent Config

    UI->>Resolver: Resolve model for USE_MINI
    Resolver->>Conf: check `miniModel`
    alt miniModel present
        Conf-->>Resolver: return `miniModel`
    else miniModel missing
        Resolver->>Conf: check `mainModel`
        alt mainModel present
            Conf-->>Resolver: return `mainModel`
        else
            Resolver->>Conf: check `model`
            alt model present
                Conf-->>Resolver: return `model`
            else
                Resolver-->>UI: throw "No viable Mini model" error
            end
        end
    end
    Resolver-->>UI: resolved mini/named model -> update callCtx.miniModel

    UI->>Resolver: Resolve model for USE_NANO
    Resolver->>Conf: check `nanoModel`
    alt nanoModel present
        Conf-->>Resolver: return `nanoModel`
    else
        Resolver->>Conf: check `miniModel` → `mainModel` → `model` in order
        alt fallback found
            Conf-->>Resolver: return fallback model
        else
            Resolver-->>UI: throw "No viable Nano model" error
        end
    end
    Resolver-->>UI: resolved nano -> update callCtx.nanoModel
    UI->>UI: include resolved models in runnerConfig

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Review focus:
    • Correctness and precedence of the mini/nano fallback chains in ConfigurableAgentTool.ts
    • Propagation of resolved mini/nano into callCtx and runnerConfig consumers
    • The interface addition metadata?: ExtractionMetadata in SchemaBasedExtractorTool.ts and its downstream uses
    • Consistency of provider name checks ('litellm' vs other provider identifiers) across tools

Possibly related PRs

Suggested reviewers

  • olesho

Poem

🐰
I hop the model lanes with glee,
Lite or Browser — no key for me,
Mini hides — Main lends a paw,
Nano tumbles through the draw,
A nibble, a fallback, onward we spree.

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
  • Title check: ❓ Inconclusive. The title 'fix LiteLLM model issue' is vague and doesn't clearly describe the actual changes, which involve API key requirement logic updates across multiple tools and model fallback configuration. Resolution: consider a more descriptive title such as 'Add LiteLLM to providers exempt from API key requirement' or 'Update API key validation to exclude LiteLLM provider' to better reflect the scope of changes.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2af0cb8 and 69839dc.

📒 Files selected for processing (1)
  • front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (2)
front_end/panels/ai_chat/ui/AIChatPanel.ts (1)
  • provider (1488-1554)
front_end/panels/ai_chat/core/Constants.ts (1)
  • MODEL_SENTINELS (85-88)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: claude-review
🔇 Additional comments (2)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (2)

502-507: LGTM! Improved fallback logic for mini model.

The mini model resolution now gracefully falls back through miniModel → mainModel → model, with a clear error message indicating all available fallback sources. This improves robustness when specific model tiers aren't configured.


508-513: LGTM! Comprehensive fallback chain for nano model.

The nano model resolution implements a complete fallback hierarchy (nanoModel → miniModel → mainModel → model), ensuring the agent can gracefully degrade through all available model tiers. The error message appropriately references all fallback sources.

chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 501 to +505
      if (this.config.modelName === MODEL_SENTINELS.USE_MINI) {
-       if (!callCtx.miniModel) {
-         throw new Error(`Mini model not provided in context for agent '${this.name}'. Ensure context includes miniModel.`);
+       // Fall back to main model if mini model is not configured
+       modelName = callCtx.miniModel || callCtx.mainModel || callCtx.model || '';
+       if (!modelName) {


P1 Badge Propagate fallback model into call context

The new fallback for MODEL_SENTINELS.USE_MINI/USE_NANO now chooses callCtx.mainModel/callCtx.model when a mini or nano model is absent, but the fallback is only assigned to the local modelName. The subsequent runnerConfig still forwards the original callCtx.miniModel/callCtx.nanoModel values (which are undefined), so tools that rely on ctx.miniModel or ctx.nanoModel (e.g. FullPageAccessibilityTreeToMarkdownTool checks ctx.nanoModel and returns “Missing LLM context”) will still fail even though a fallback model is available. Before this change the agent aborted early with a clear error; now it proceeds until a tool accesses the missing field. Update the context to include the resolved fallback model so mini/nano-dependent tools can run.
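The fix this comment asks for can be sketched as: write the resolved fallback back into the call context before building `runnerConfig`. A sketch under the review's field names; the function boundary and `agentName` parameter are illustrative.

```typescript
interface CallCtx {
  miniModel?: string;
  mainModel?: string;
  model?: string;
}

// Resolve the mini model and propagate the result into the context so that
// downstream tools checking ctx.miniModel see the fallback, not undefined.
function resolveAndPropagateMini(ctx: CallCtx, agentName: string): string {
  const resolved = ctx.miniModel || ctx.mainModel || ctx.model || '';
  if (!resolved) {
    throw new Error(`Mini model not provided in context for agent '${agentName}'.`);
  }
  ctx.miniModel = resolved;
  return resolved;
}
```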


claude bot commented Nov 11, 2025

Claude finished @tysonthomas9's task


Code Review for PR #68: fix LiteLLM model issue

Review Checklist

  • Analyze code changes and implementation
  • Review code quality and best practices
  • Identify potential bugs or issues
  • Assess performance considerations
  • Check security concerns
  • Evaluate test coverage needs
  • Examine issues raised by other reviewers

🔍 Summary

This PR makes significant improvements to LiteLLM provider support and model fallback logic across 7 files. The changes are functionally sound and address real usability issues, but there are some important considerations for robustness and consistency.

Positive Aspects

1. Comprehensive API Key Logic Update

  • Consistently updates all 7 tool files to treat both litellm and browseroperator as optional API key providers
  • Fixes a real usability issue where LiteLLM users were incorrectly required to provide API keys

2. Robust Model Fallback Implementation

  • Adds intelligent fallback chains for USE_MINI and USE_NANO sentinels:
    • Mini: miniModel → mainModel → model
    • Nano: nanoModel → miniModel → mainModel → model
  • Includes clear error messages referencing all fallback sources

3. Context Propagation Fix (ConfigurableAgentTool.ts:534-540)

  • Addresses the critical issue raised by the chatgpt-codex-connector review
  • Updates callCtx.miniModel and callCtx.nanoModel with resolved fallback values
  • Ensures downstream tools receive the fallback model instead of undefined

⚠️ Issues & Recommendations

1. Type Safety Concerns (ConfigurableAgentTool.ts:504, 510)

// Current (risky - could result in empty string):
modelName = callCtx.miniModel || callCtx.mainModel || callCtx.model || '';

// Recommended (explicit null handling):
modelName = callCtx.miniModel ?? callCtx.mainModel ?? callCtx.model ?? null;
if (!modelName) {
  throw new Error('...');
}

2. Grammar Fix (ConfigurableAgentTool.ts:464)

- // LiteLLM and BrowserOperator has optional API keys
+ // LiteLLM and BrowserOperator have optional API keys

3. Potential Edge Cases

  • Provider Validation: No validation that callCtx.provider is a known provider type
  • Concurrent Access: Modifying callCtx in-place could cause issues if the same context is used concurrently
  • Fallback Edge Cases: What happens when all models in the fallback chain are empty strings?

🔒 Security Review

✅ No Security Concerns:

  • API key handling logic is appropriate and secure
  • No exposure of sensitive data in error messages
  • Proper validation of provider types before API key checks

Performance Review

✅ No Performance Issues:

  • Changes are minimal with no performance overhead
  • Fallback chains use efficient short-circuit evaluation
  • Context updates are O(1) operations

🧪 Test Coverage Recommendations

Critical Missing Test Cases:

  1. LiteLLM API Key Exemption

    test('should not require API key for litellm provider', () => {
      // Test that litellm works without apiKey
    });
  2. Model Fallback Chains

    test('should fallback from mini to main when miniModel undefined', () => {
      // Test mini fallback logic
    });
    
    test('should fallback through complete nano chain', () => {
      // Test nano → mini → main → model fallback
    });
  3. Context Propagation

    test('should update callCtx with resolved fallback models', () => {
      // Test that downstream tools receive fallback values
    });
  4. Edge Cases

    test('should handle all-undefined model contexts gracefully', () => {
      // Test error handling when no models available
    });

🐛 Bug Analysis

Verified Issues from Other Reviewers:

  1. API Key Consistency - Fixed across all 7 files
  2. Context Propagation - Addressed with lines 534-540
  3. Grammar Error - Still present in line 464

Additional Potential Issues:

  • Empty String Fallback: The || '' fallback could mask configuration issues
  • Context Mutation: Modifying callCtx properties could have unintended side effects

📋 Recommendations

High Priority:

  1. Fix grammar error in comment (line 464)
  2. Replace || '' with null checks for better error handling
  3. Add unit tests for new fallback logic

Medium Priority:

  1. Consider immutable context updates instead of in-place modification
  2. Add provider type validation/enum
  3. Document the fallback behavior in agent configuration docs
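The immutable-update idea (medium-priority item 1 above) could be sketched as below; this is an illustration of the suggestion, not the PR's actual implementation.

```typescript
interface CallCtx {
  miniModel?: string;
  mainModel?: string;
  model?: string;
}

// Derive a new context carrying the resolved model instead of mutating in
// place, so any concurrent holders of the original context are unaffected.
function withResolvedMini(ctx: CallCtx, resolved: string): CallCtx {
  return { ...ctx, miniModel: resolved };
}
```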

Overall Assessment

Verdict: APPROVE with minor fixes recommended

This PR successfully addresses the LiteLLM model issue and adds valuable fallback functionality. The implementation is consistent across files and includes good error messaging. While there are some type safety and edge case considerations, the core functionality is sound and the changes are well-scoped to the problem domain.

The code quality is high, follows existing patterns, and maintains backward compatibility. The main suggestions are defensive programming improvements rather than functional issues.


Claude Code Analysis completed

coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (5)
front_end/panels/ai_chat/tools/CombinedExtractionTool.ts (1)

90-92: Normalize provider string; consider centralizing requiresApiKey logic

Lowercasing avoids accidental mismatches; extracting a helper avoids copy/paste across tools.

-    // LiteLLM and BrowserOperator have optional API keys
-    const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
+    // LiteLLM and BrowserOperator have optional API keys
+    const p = String(provider || '').toLowerCase();
+    const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';

Optionally add a shared util: providerRequiresApiKey(p?: string): boolean, and use it across all tools.
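The suggested shared util could be as small as the following. The name `providerRequiresApiKey` comes from the comment above; where the helper would live in the codebase is an assumption.

```typescript
// Shared helper: normalize the provider string once and gate the API-key
// requirement in one place instead of copy/pasting the check across tools.
function providerRequiresApiKey(provider?: string): boolean {
  const p = String(provider ?? '').toLowerCase();
  return p !== 'litellm' && p !== 'browseroperator';
}
```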

front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (1)

116-118: Normalize provider; align with shared helper when introduced

Keeps behavior consistent with other tools.

-    // LiteLLM and BrowserOperator have optional API keys
-    const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
+    // LiteLLM and BrowserOperator have optional API keys
+    const p = String(provider || '').toLowerCase();
+    const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (1)

70-72: Normalize provider; optional: factor to shared util

Prevents case-sensitivity issues and reduces duplication once centralized.

-    // LiteLLM and BrowserOperator have optional API keys
-    const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
+    // LiteLLM and BrowserOperator have optional API keys
+    const p = String(provider || '').toLowerCase();
+    const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
front_end/panels/ai_chat/tools/FullPageAccessibilityTreeToMarkdownTool.ts (1)

58-60: Normalize provider; good that LLM context is validated before key gating

This file already validates ctx.provider/ctx.nanoModel first (nice). Normalize provider string for safety.

-    // LiteLLM and BrowserOperator have optional API keys
-    const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
+    // LiteLLM and BrowserOperator have optional API keys
+    const p = String(provider || '').toLowerCase();
+    const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1)

111-113: Normalize provider and consider deferring key gating until first LLM call

Today this may emit “API key not configured” even when provider is missing or a path doesn’t use LLM. Normalize provider, and optionally move the check closer to the first LLM call.

-    // LiteLLM and BrowserOperator have optional API keys
-    const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
+    // LiteLLM and BrowserOperator have optional API keys
+    const p = String(provider || '').toLowerCase();
+    const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 69839dc and 76d53d0.

📒 Files selected for processing (7)
  • front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (3 hunks)
  • front_end/panels/ai_chat/tools/CombinedExtractionTool.ts (1 hunks)
  • front_end/panels/ai_chat/tools/FullPageAccessibilityTreeToMarkdownTool.ts (1 hunks)
  • front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (1 hunks)
  • front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1 hunks)
  • front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (1 hunks)
  • front_end/panels/ai_chat/tools/Tools.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
front_end/panels/ai_chat/tools/CombinedExtractionTool.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (1)
  • provider (1488-1554)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (2)
front_end/panels/ai_chat/ui/AIChatPanel.ts (1)
  • provider (1488-1554)
front_end/panels/ai_chat/core/Constants.ts (1)
  • MODEL_SENTINELS (85-88)
front_end/panels/ai_chat/tools/FullPageAccessibilityTreeToMarkdownTool.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (1)
  • provider (1488-1554)
front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (1)
  • provider (1488-1554)
front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (1)
  • provider (1488-1554)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (1)
  • provider (1488-1554)
🔇 Additional comments (1)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (1)

532-540: Good fix: propagate resolved mini/nano into call context

Verified that downstream tools across multiple files (HTMLToMarkdownTool, FullPageAccessibilityTreeToMarkdownTool, SchemaBasedExtractorTool, StreamlinedSchemaExtractorTool, CritiqueTool, Tools) all check for ctx.nanoModel or ctx.miniModel before using them. This fix correctly unblocks those tools by ensuring the resolved fallback models are available in the call context.

Comment on lines +463 to +466
// Check if API key is required based on provider
// LiteLLM and BrowserOperator have optional API keys
// Other providers (OpenAI, Groq, OpenRouter) require API keys
const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';

⚠️ Potential issue | 🟠 Major

Reorder provider validation before API‑key gating to avoid misleading errors

If provider is undefined, the current order can return “API key not configured” instead of the clearer “Provider not provided…”. Validate provider first, then compute requiresApiKey.

-    // Check if API key is required based on provider
-    // LiteLLM and BrowserOperator have optional API keys
-    // Other providers (OpenAI, Groq, OpenRouter) require API keys
-    const requiresApiKey = provider !== 'litellm' && provider !== 'browseroperator';
-
-    if (requiresApiKey && !apiKey) {
+    // Validate required context first
+    if (!callCtx.provider) {
+      throw new Error(`Provider not provided in context for agent '${this.name}'. Ensure context includes provider.`);
+    }
+
+    // Check if API key is required based on provider (normalize for safety)
+    const p = String(callCtx.provider).toLowerCase();
+    const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
+
+    if (requiresApiKey && !apiKey) {
       const errorResult = this.createErrorResult(`API key not configured for ${this.name}`, [], 'error');
       // Create minimal error session
       const errorSession: AgentSession = {
         agentName: this.name,

Also applies to: 541-545

🤖 Prompt for AI Agents
In front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts around
lines 463-466 (and also apply same change at 541-545): currently the code
computes requiresApiKey using the provider variable before validating that
provider is present, which can produce an “API key not configured” error when
provider is undefined; change the flow to validate that provider is provided
first (throw or return the clearer “Provider not provided” error) and only after
that compute requiresApiKey (e.g., determine provider-specific optional/required
status) so you never evaluate provider comparisons when provider is undefined.

Comment on lines +503 to 507
// Fall back to main model if mini model is not configured
modelName = callCtx.miniModel || callCtx.mainModel || callCtx.model || '';
if (!modelName) {
throw new Error(`Mini model not provided in context for agent '${this.name}'. Ensure context includes miniModel or mainModel.`);
}

⚠️ Potential issue | 🟡 Minor

Include ‘model’ in USE_MINI failure message to match actual fallback chain

Code falls back to callCtx.model, but the error mentions only miniModel/mainModel. Update message for accuracy.

-        throw new Error(`Mini model not provided in context for agent '${this.name}'. Ensure context includes miniModel or mainModel.`);
+        throw new Error(`Mini model not provided in context for agent '${this.name}'. Ensure context includes miniModel, mainModel, or model.`);
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts around
lines 503 to 507, the thrown error message only references miniModel and
mainModel while the fallback chain actually includes callCtx.model; update the
error string to include 'model' (or phrase like "miniModel, mainModel, or
model") so the message accurately reflects the fallback order, keeping the rest
of the logic unchanged.

Comment on lines +509 to 513
// Fall back through nano -> mini -> main model chain
modelName = callCtx.nanoModel || callCtx.miniModel || callCtx.mainModel || callCtx.model || '';
if (!modelName) {
throw new Error(`Nano model not provided in context for agent '${this.name}'. Ensure context includes nanoModel, miniModel, or mainModel.`);
}

⚠️ Potential issue | 🟡 Minor

Include ‘model’ in USE_NANO failure message to match actual fallback chain

The code falls back nano → mini → main → model; error omits model.

-        throw new Error(`Nano model not provided in context for agent '${this.name}'. Ensure context includes nanoModel, miniModel, or mainModel.`);
+        throw new Error(`Nano model not provided in context for agent '${this.name}'. Ensure context includes nanoModel, miniModel, mainModel, or model.`);
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts around
lines 509 to 513, the fallback chain used to determine modelName is nano -> mini
-> main -> model but the thrown Error message only mentions nanoModel,
miniModel, and mainModel; update the error text to include 'model' so it
reflects the actual fallback chain (e.g., "Ensure context includes nanoModel,
miniModel, mainModel, or model.") and keep the rest of the logic unchanged.

Comment on lines +2970 to 2973
// LiteLLM and BrowserOperator have optional API keys
const requiresApiKey = providerForAction !== 'litellm' && providerForAction !== 'browseroperator';

if (requiresApiKey && !apiKey) {return { error: 'API key not configured.' };}

⚠️ Potential issue | 🟠 Major

Normalize provider; also update 404 confirmation path to honor LiteLLM/BrowserOperator exemption

First, normalize here:

-    // LiteLLM and BrowserOperator have optional API keys
-    const requiresApiKey = providerForAction !== 'litellm' && providerForAction !== 'browseroperator';
+    // LiteLLM and BrowserOperator have optional API keys
+    const p = String(providerForAction || '').toLowerCase();
+    const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';

Additionally, confirmWith404LLM currently bails out if no API key even for litellm/browseroperator. Apply the same exemption there to enable 404 checks without a key:

// Inside confirmWith404LLM(...)
const apiKey = agentService.getApiKey();
const p = String(ctx?.provider || '').toLowerCase();
const requiresApiKey = p !== 'litellm' && p !== 'browseroperator';
if (requiresApiKey && !apiKey) {
  logger.warn('No API key available for 404 confirmation');
  return false;
}

@tysonthomas9 tysonthomas9 merged commit a67e120 into main Nov 11, 2025
7 checks passed
@tysonthomas9 tysonthomas9 deleted the fix/litellm branch December 1, 2025 06:44