fix: add reasoning config only if is reasoning model #5221
Merged: mscolnick merged 2 commits into marimo-team:main from bjoaquinc:update-reasoning-for-openai-anthropic on Jun 9, 2025
Conversation
…ic and openai to fix the no-return-any test fail
mscolnick approved these changes on Jun 9, 2025
🚀 Development release published. You may be able to view the changes at https://marimo.app?v=0.13.16-dev57
Light2Dark pushed a commit that referenced this pull request on Jun 9, 2025
sebbeutler pushed a commit to sebbeutler/marimo that referenced this pull request on Jun 28, 2025
sebbeutler pushed a commit to sebbeutler/marimo that referenced this pull request on Jul 7, 2025
## 📝 Summary

Updated the OpenAI and Anthropic clients to include the reasoning config only when a reasoning model is in use, because passing it to non-reasoning models raises errors.
## 🔍 Description of Changes

- Added `reasoning_effort`, `DEFAULT_REASONING_EFFORT`, and `is_reasoning_model` for OpenAI.
- Decoupled `create_params` from the `.create()` method for both the OpenAI and Anthropic clients.
- Added the reasoning/thinking config only when the model is a reasoning/thinking model; otherwise it is omitted entirely, since passing `None` or other null values still raised errors for non-reasoning models (a sketch of this pattern follows the note below).
_Note: OpenAI does not send reasoning tokens back when streaming, but setting `reasoning_effort` to `medium` does improve the output._
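To illustrate the change, here is a minimal, hypothetical sketch of the pattern described above: build `create_params` separately from the `.create()` call and attach the reasoning/thinking config only when the model supports it. The model-detection heuristics, the Anthropic thinking budget, and everything beyond the names listed in the bullets are assumptions for illustration, not marimo's actual implementation.

```python
# Hypothetical sketch of "only add reasoning config for reasoning models".
# The model checks and the Anthropic budget below are illustrative assumptions.
from typing import Any

DEFAULT_REASONING_EFFORT = "medium"  # name taken from the PR description


def is_reasoning_model(model: str) -> bool:
    # Assumed heuristic: OpenAI o-series models accept reasoning_effort;
    # other chat models reject it.
    return model.startswith(("o1", "o3", "o4"))


def is_thinking_model(model: str) -> bool:
    # Assumed heuristic for Anthropic extended-thinking models.
    return "claude-3-7" in model or "claude-sonnet-4" in model


def openai_create_params(model: str, messages: list[dict[str, Any]]) -> dict[str, Any]:
    # Build create_params separately from the .create() call so the reasoning
    # config can be added conditionally. Passing reasoning_effort=None to a
    # non-reasoning model still errors, so the key is omitted entirely.
    params: dict[str, Any] = {
        "model": model,
        "messages": messages,
        "stream": True,
    }
    if is_reasoning_model(model):
        params["reasoning_effort"] = DEFAULT_REASONING_EFFORT
    return params


def anthropic_create_params(model: str, messages: list[dict[str, Any]]) -> dict[str, Any]:
    params: dict[str, Any] = {
        "model": model,
        "max_tokens": 4096,
        "messages": messages,
        "stream": True,
    }
    if is_thinking_model(model):
        # Extended thinking takes a token budget; 1024 is an illustrative
        # choice, not a value from the PR.
        params["thinking"] = {"type": "enabled", "budget_tokens": 1024}
    return params


# Usage (no network call is made here):
# client = openai.OpenAI()
# response = client.chat.completions.create(**openai_create_params("o3-mini", msgs))
```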
## 📋 Checklist

- [x] I have read the [contributor guidelines](https://github.com/marimo-team/marimo/blob/main/CONTRIBUTING.md).
- [ ] For large changes, or changes that affect the public API: this change was discussed or approved through an issue, on [Discord](https://marimo.io/discord?ref=pr), or the community [discussions](https://github.com/marimo-team/marimo/discussions) (please provide a link if applicable).
- [ ] I have added tests for the changes made.
- [x] I have run the code and verified that it works as expected.
## 📜 Reviewers

@mscolnick