feat: add qwen 3 32B, remove deprecated model, fix reasoning #446

Conversation
Walkthrough

The changes update the model provider configuration to support the new "none" thinking level, reorganize Groq provider entries for Qwen models, and explicitly allow the "reasoning_effort" parameter in LiteLLM adapter completion kwargs to prevent it from being dropped.
Actionable comments posted: 0
🧹 Nitpick comments (1)
libs/core/kiln_ai/adapters/model_adapters/litellm_adapter.py (1)
372-379: `allowed_openai_params` list is hard-coded and may silently discard caller-supplied values

`allowed_openai_params` is always overwritten with `["reasoning_effort"]`, so any value injected through `LiteLlmConfig.additional_body_options` disappears. This is harmless today, but it makes the adapter brittle and surprises integrators who try to extend the list.

```diff
-            # This overrides the drop_params setting above for specific parameters that we know should not be dropped
-            # but litellm drops because it is not aware that the model supports them.
-            "allowed_openai_params": ["reasoning_effort"],
+            # Preserve any caller-supplied entries while ensuring we always keep
+            # `reasoning_effort`.
+            "allowed_openai_params": list(
+                set(
+                    ["reasoning_effort"]
+                    + self._additional_body_options.get("allowed_openai_params", [])
+                )
+            ),
```

(If `additional_body_options` is guaranteed not to contain the key, feel free to ignore.)
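For readers outside the codebase, here is a minimal standalone sketch of the merge the diff above performs; the function name and the plain-`dict` stand-in for `LiteLlmConfig.additional_body_options` are illustrative, not part of the adapter:

```python
# Standalone sketch of the suggested merge (illustrative names only).
def merged_allowed_openai_params(additional_body_options: dict) -> list[str]:
    # Keep reasoning_effort in all cases, but preserve any entries the
    # caller injected via additional_body_options instead of overwriting them.
    caller_supplied = additional_body_options.get("allowed_openai_params", [])
    return list({"reasoning_effort", *caller_supplied})

# A caller extending the allow-list no longer loses reasoning_effort:
print(merged_allowed_openai_params({"allowed_openai_params": ["logprobs"]}))
# e.g. ['logprobs', 'reasoning_effort'] (set iteration order is unspecified)
```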
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- libs/core/kiln_ai/adapters/ml_model_list.py (3 hunks)
- libs/core/kiln_ai/adapters/model_adapters/litellm_adapter.py (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: leonardmq
PR: Kiln-AI/Kiln#418
File: libs/core/kiln_ai/adapters/ml_model_list.py:0-0
Timestamp: 2025-07-16T09:37:39.792Z
Learning: The `glm_z1_rumination_32b_0414` model was intentionally removed from the built_in_models list due to output formatting issues: output was duplicated in both `output` and `reasoning` fields, and contained random internal JSON in the output. This model should not be re-added without addressing these formatting problems.
Learnt from: leonardmq
PR: Kiln-AI/Kiln#341
File: libs/server/kiln_server/document_api.py:44-51
Timestamp: 2025-06-18T08:22:58.510Z
Learning: leonardmq prefers to defer fixing blocking I/O in async handlers when: the operation is very fast (milliseconds), user-triggered rather than automated, has no concurrency concerns, and would require additional testing to fix properly. He acknowledges such issues as valid but makes pragmatic decisions about timing the fixes.
Learnt from: leonardmq
PR: Kiln-AI/Kiln#402
File: libs/core/kiln_ai/adapters/embedding/litellm_embedding_adapter.py:0-0
Timestamp: 2025-07-14T03:43:07.265Z
Learning: leonardmq prefers to keep defensive validation checks even when they're technically redundant, viewing them as useful "quick sanity checks" that provide additional safety nets. He values defensive programming over strict DRY (Don't Repeat Yourself) principles when the redundant code serves as a safeguard.
Learnt from: leonardmq
PR: Kiln-AI/Kiln#388
File: libs/core/kiln_ai/datamodel/test_extraction_chunk.py:49-74
Timestamp: 2025-07-03T05:13:02.873Z
Learning: leonardmq prefers automatic cleanup of temporary files in tests using `delete=True` in `tempfile.NamedTemporaryFile()` context managers, rather than manual cleanup with `delete=False` and explicit file removal. This is because automatic cleanup is simpler, safer, and avoids leaving temporary files if tests fail before cleanup.
Learnt from: leonardmq
PR: Kiln-AI/Kiln#0
File: :0-0
Timestamp: 2025-06-27T06:45:06.591Z
Learning: leonardmq prefers unused imports to be flagged as actionable review comments rather than nitpick comments. Unused imports should be treated as concrete issues that need to be addressed, not just suggestions.
libs/core/kiln_ai/adapters/ml_model_list.py (1)
Learnt from: leonardmq
PR: Kiln-AI/Kiln#418
File: libs/core/kiln_ai/adapters/ml_model_list.py:0-0
Timestamp: 2025-07-16T09:37:39.792Z
Learning: The `glm_z1_rumination_32b_0414` model was intentionally removed from the built_in_models list due to output formatting issues: output was duplicated in both `output` and `reasoning` fields, and contained random internal JSON in the output. This model should not be re-added without addressing these formatting problems.
🧬 Code Graph Analysis (1)

libs/core/kiln_ai/adapters/ml_model_list.py (2)

- libs/core/kiln_ai/datamodel/datamodel_enums.py (2)
  - ModelProviderName (80-99)
  - StructuredOutputMode (23-45)
- libs/core/kiln_ai/adapters/parsers/test_r1_parser.py (1)
  - parser (8-9)
🔇 Additional comments (3)

libs/core/kiln_ai/adapters/ml_model_list.py (3)

201-201: LGTM! Thinking level extension aligns with PR objectives.

The addition of `"none"` to the `thinking_level` options correctly implements support for disabling thinking on Qwen3 models, as described in the PR description, and aligns with Groq's reasoning options.

2398-2405: LGTM! Groq provider configuration for Qwen 3 32B is well-structured.

The new Groq provider configuration correctly implements support for the `Qwen/Qwen3-32B` model, with appropriate settings for reasoning capabilities, including the `r1_thinking` parser and `json_instructions` structured output mode.

2429-2435: LGTM! Non-thinking Groq provider configuration properly disables thinking.

The configuration correctly uses the new `thinking_level="none"` option and appropriately differs from the thinking variant by using the `json_schema` structured output mode and omitting the parser and reasoning capabilities.
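To make the two variants concrete, here is a hedged sketch of how the thinking and non-thinking entries might be shaped, written as plain dicts because the exact provider field names are inferred from the review comments rather than copied from `ml_model_list.py`:

```python
# Hypothetical sketch of the two Groq provider entries reviewed above.
# Keys mirror the settings named in the review; the real code uses Kiln's
# provider model classes, not plain dicts.
qwen3_32b_thinking = {
    "provider": "groq",
    "model_id": "qwen/qwen3-32b",
    "reasoning_capable": True,
    "parser": "r1_thinking",                      # strips <think>...</think> blocks
    "structured_output_mode": "json_instructions",
}
qwen3_32b_non_thinking = {
    "provider": "groq",
    "model_id": "qwen/qwen3-32b",
    "thinking_level": "none",                     # Groq's switch to disable thinking
    "structured_output_mode": "json_schema",
}
```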
What does this PR do?

- Adds `Qwen/Qwen3-32B` on Groq, in both non-thinking and thinking variants (ref: https://groq.com/pricing)
- Removes `qwen-qwq-32b`, which has been decommissioned on Groq
- Adds a `none` value for `thinking_level`, because this is how Groq supports disabling thinking for Qwen3 models (ref: https://console.groq.com/docs/reasoning#options-for-reasoning-effort)
- Allows `reasoning_effort` in the `allowed_openai_params` we pass on to LiteLLM, overriding `drop_params=True` for this particular field, which LiteLLM otherwise drops incorrectly (ref: https://docs.litellm.ai/docs/completion/drop_params#specify-allowed-openai-params-in-a-request); see the sketch after this list

Also noticed that a lot of the smaller Qwen3 models no longer have any active provider on OpenRouter.
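A minimal sketch of the request this change enables, assuming LiteLLM's documented `allowed_openai_params` kwarg and Groq's `qwen/qwen3-32b` model id; both are taken from the linked docs, not from this repo's code:

```python
# Minimal sketch: keep reasoning_effort even with drop_params=True.
import litellm

response = litellm.completion(
    model="groq/qwen/qwen3-32b",
    messages=[{"role": "user", "content": "What is 12 * 13?"}],
    drop_params=True,                             # normally strips unsupported params
    allowed_openai_params=["reasoning_effort"],   # ...but keep this one
    reasoning_effort="none",                      # Groq's way to disable Qwen3 thinking
)
print(response.choices[0].message.content)
```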
Summary by CodeRabbit

New Features

- Added Qwen 3 32B on Groq in both thinking and non-thinking variants, with a new "none" thinking level for disabling reasoning.

Bug Fixes

- The reasoning_effort parameter is no longer dropped from LiteLLM completion requests, and the decommissioned qwen-qwq-32b Groq model was removed.