Fixes remeda mergeDeep error that was causing all OpenCode CLI commands
to fail with 'Wrong number of arguments' before processing any messages.
The bug was introduced in commit 2536784, which added providerPreference
support and passed 3 arguments to mergeDeep, which only accepts 2.
Changed from:
  mergeDeep(A, B, C)
To:
  mergeDeep(mergeDeep(A, B), C)
This preserves the provider preference functionality while fixing the crash.
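The nesting works because remeda's mergeDeep is binary, with the right-hand
argument winning on conflicts, so a three-way merge must be composed from two
calls. A minimal stand-in (illustrative only, not the real remeda
implementation) shows the pattern:

```typescript
// Minimal stand-in for a binary deep merge: values from `source` win,
// plain objects merge recursively, everything else is replaced.
type Obj = Record<string, unknown>;

function mergeDeep(target: Obj, source: Obj): Obj {
  const out: Obj = { ...target };
  for (const [key, value] of Object.entries(source)) {
    const existing = out[key];
    if (
      existing && value &&
      typeof existing === "object" && typeof value === "object" &&
      !Array.isArray(existing) && !Array.isArray(value)
    ) {
      out[key] = mergeDeep(existing as Obj, value as Obj);
    } else {
      out[key] = value;
    }
  }
  return out;
}

// A three-way merge is expressed as two nested binary merges:
const A = { options: { timeout: 30 }, name: "base" };
const B = { options: { retries: 2 } };
const C = { options: { timeout: 60 } };

const merged = mergeDeep(mergeDeep(A, B), C);
// merged => { options: { timeout: 60, retries: 2 }, name: "base" }
```

Later keys override earlier ones, so C's timeout wins while B's retries and
A's name survive, matching the behavior of a variadic merge.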
Implement provider preference feature for OpenRouter models:
1. Database Schema:
- Add ModelProviderPreferenceTable to store provider preferences per model
2. Console Core API:
- Add ProviderPreferenceSchema for validation
- Add setProviderPreference(), getProviderPreference(), deleteProviderPreference(), and listProviderPreferences() functions
3. Request Handler:
- Modify handler.ts to fetch provider preferences from database
- Pass provider preferences to OpenRouter in request body
4. Admin UI:
- Add provider preference dropdown for OpenRouter models in model-section.tsx
- Add 'Allow fallbacks' checkbox for fallback control
- Add corresponding CSS styles
5. CLI Config:
- Add providerPreference field to Provider model config schema
- Support configuring provider preferences via opencode.json
6. Provider SDK:
- Inject provider preferences into OpenRouter request body in fetch wrapper
This allows workspace admins to specify preferred providers for OpenRouter
models, ensuring models like Claude route through specific providers
(e.g., Anthropic direct) rather than third-party providers.
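The fetch-wrapper injection could look like the sketch below. The preference
shape and function name are assumptions, but the `provider` object with
`order` and `allow_fallbacks` mirrors OpenRouter's provider-routing request
fields:

```typescript
// Hypothetical shape of a stored preference; these field names are
// assumptions, not the actual ModelProviderPreferenceTable columns.
interface ProviderPreference {
  order: string[];          // preferred upstream providers, in priority order
  allowFallbacks: boolean;  // whether OpenRouter may fall back to others
}

// Merge the preference into the outgoing OpenRouter request body under the
// `provider` key, leaving all other fields untouched.
function injectProviderPreference(
  body: Record<string, unknown>,
  pref: ProviderPreference | undefined,
): Record<string, unknown> {
  if (!pref) return body;
  return {
    ...body,
    provider: {
      order: pref.order,
      allow_fallbacks: pref.allowFallbacks,
    },
  };
}

const outgoing = injectProviderPreference(
  { model: "anthropic/claude-sonnet-4", messages: [] },
  { order: ["Anthropic"], allowFallbacks: false },
);
```

With `allow_fallbacks: false`, OpenRouter is asked to route only through the
listed providers rather than silently falling back to a third party.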
Add full support for Kilo Gateway AI provider across the codebase:
- Add custom loader for 'kilo' provider in provider.ts with API key detection
- Add auth CLI hint for Kilo Gateway in auth.ts
- Add Kilo to DEFAULT_PROVIDER_SEEDS in chat server.js
- Add Kilo to PLANNING_PROVIDERS for planning chain support
- Add Kilo provider configuration with baseURL and API key support
- Update admin env config to show Kilo API key status
Kilo Gateway provides access to 500+ AI models through a single
OpenAI-compatible endpoint at https://api.kilo.ai/api/gateway
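A custom loader along these lines could look like the following sketch. The
config shape and the KILO_API_KEY variable name are assumptions; only the
gateway URL comes from the commit message:

```typescript
// Hypothetical provider-loader result; not OpenCode's actual interface.
interface ProviderConfig {
  autoload: boolean;
  options?: { baseURL: string; apiKey: string };
}

// Autoload the Kilo provider only when an API key is present in the
// environment, pointing it at the OpenAI-compatible gateway endpoint.
function loadKiloProvider(
  env: Record<string, string | undefined>,
): ProviderConfig {
  const apiKey = env["KILO_API_KEY"]; // assumed env var name
  if (!apiKey) return { autoload: false };
  return {
    autoload: true,
    options: {
      baseURL: "https://api.kilo.ai/api/gateway",
      apiKey,
    },
  };
}
```

Keying autoload off the env var means the provider simply stays hidden for
users who have not configured a Kilo API key.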
The debug logging added in commit dacde39 tried to access the 'env' variable
before it was initialized at line 946. Changed it to use process.env, which
is always available.
Chutes AI counts each HTTP API request separately. The existing fix using
stepCountIs(1) only limited the Vercel AI SDK's internal loop, but the
outer while(true) loops in processor.ts and prompt.ts continued to make
additional HTTP requests after tool execution.
This fix:
- Returns singleStepTools flag from LLM.stream() to signal single-step mode
- Breaks out of processor.ts inner loop after one iteration for Chutes
- Breaks out of prompt.ts outer loop after one iteration for Chutes
This ensures only one HTTP request is made per user message for providers
like Chutes that bill per request.
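The single-step control flow can be sketched as follows. Only the
singleStepTools flag name comes from the change above; the stream-result
shape and helper function are illustrative:

```typescript
// Simplified stand-in for one LLM.stream() step result.
interface StreamResult {
  finished: boolean;        // model produced a final answer, no more tool calls
  singleStepTools: boolean; // true for per-request-billed providers like Chutes
}

// Simplified stand-in for the outer message loop: normally it keeps issuing
// HTTP requests until the model finishes, but in single-step mode it stops
// after the first iteration instead of looping back after tool execution.
function runTurn(step: () => StreamResult): number {
  let requests = 0;
  while (true) {
    const result = step();
    requests++;
    if (result.finished) break;
    if (result.singleStepTools) break; // one HTTP request per user message
  }
  return requests;
}
```

For a provider with the flag set, the turn ends after a single request even
when the model asked for a tool call; other providers keep the multi-step
loop unchanged.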
Documents the issue where tool calls count as separate Chutes AI requests,
along with proposed solutions, technical analysis, and user concerns about
breaking sequential workflows.
Includes:
- Root cause analysis of Vercel AI SDK multi-step execution
- 4 proposed solution options with pros/cons
- User concerns about model context and workflow breaks
- Code references and technical diagrams
- Recommended next steps for testing and implementation
Relates to: Tool call execution flow in session management
- Change openrouter/pony-alpha model status from 'alpha' to 'beta' to prevent deletion
- Fix ReferenceError where heartbeat was used before initialization in cleanupStream
- Declare heartbeat and streamTimeout with let before the cleanupStream function
- Change the const timer assignments to let so the handles can be reassigned
Add detailed logging to provider initialization to help debug why Chutes
and other providers aren't loading:
- Log all providers found in database
- Log API key env vars detected
- Log custom loader results with autoload status
- Log final loaded providers count
This will help identify whether the issue is:
1. Database not loading correctly
2. Missing env vars
3. Custom loaders not being called
4. Providers being filtered out
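A hypothetical sketch of what such diagnostic logging might look like; all
names and shapes here are assumptions, not OpenCode's actual loader code:

```typescript
// Simplified stand-in for a custom-loader result.
interface LoadedProvider { id: string; autoload: boolean }

// Collect one diagnostic line per step of provider initialization, covering
// each of the failure modes listed above.
function logProviderInit(
  dbProviders: string[],
  env: Record<string, string | undefined>,
  loaderResults: LoadedProvider[],
): string[] {
  const lines: string[] = [];
  lines.push(`providers in database: ${dbProviders.join(", ") || "(none)"}`);
  const keyVars = Object.keys(env).filter((k) => k.endsWith("_API_KEY"));
  lines.push(`API key env vars detected: ${keyVars.join(", ") || "(none)"}`);
  for (const r of loaderResults) {
    lines.push(`custom loader ${r.id}: autoload=${r.autoload}`);
  }
  const loaded = loaderResults.filter((r) => r.autoload).length;
  lines.push(`final loaded providers: ${loaded}`);
  return lines;
}
```

Comparing the database list against the final count makes it immediately
visible at which of the four stages a provider drops out.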
- Add openrouter/pony-alpha model to models-api.json fixture
- Fix getModel() to look up models with a provider prefix (e.g., openrouter/pony-alpha)
When a user specifies openrouter/pony-alpha, the code now correctly looks for
the full model ID, including the prefix, in the provider's models object.
This fixes the 'ModelNotFoundError' when using OpenRouter models that have
prefixed IDs in the database.
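The lookup fix can be sketched as follows; the provider shape is an
assumption and the fallback order is illustrative, but it shows how both the
bare and the prefixed forms of a model ID can resolve to the same entry:

```typescript
// Simplified stand-in for a provider record with a models map whose keys
// may carry the provider prefix, as stored in the database.
interface Provider {
  id: string;
  models: Record<string, { id: string }>;
}

// Try the ID as given first, then the provider-prefixed form.
function getModel(provider: Provider, modelID: string) {
  return (
    provider.models[modelID] ??
    provider.models[`${provider.id}/${modelID}`]
  );
}

const openrouter: Provider = {
  id: "openrouter",
  models: { "openrouter/pony-alpha": { id: "openrouter/pony-alpha" } },
};
```

Both `getModel(openrouter, "openrouter/pony-alpha")` and
`getModel(openrouter, "pony-alpha")` now resolve, instead of raising
ModelNotFoundError for the prefixed database key.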
- Add comprehensive prompt injection security module with 160+ attack pattern detection
- Implement security checks in message handling with proper blocking and user feedback
- Add OpenRouter paid API key support (OPENROUTER_PAID_API_KEY) for premium models
- Update model discovery and chat functions to use paid API key for premium models
- Add comprehensive test suite with 434 test cases (98.39% accuracy)
- Tests cover legitimate WordPress development queries, injection attacks, obfuscated attempts
- Improve builder loading indicators with text-based progress (building/planning)
- Replace spinning animations with 'Starting build/planning process' messages
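Pattern-based screening of the kind the security module performs could look
like this minimal sketch; the patterns shown are a tiny illustrative subset,
not the module's actual 160+ list:

```typescript
// Tiny illustrative subset of injection patterns; a real list would cover
// obfuscated variants, encodings, and many more phrasings.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are now (in )?(developer|dan) mode/i,
  /reveal (your )?(system prompt|hidden instructions)/i,
];

// Return a blocking decision plus the matched pattern for user feedback.
function checkMessage(message: string): { blocked: boolean; reason?: string } {
  for (const pattern of INJECTION_PATTERNS) {
    if (pattern.test(message)) {
      return { blocked: true, reason: `matched ${pattern}` };
    }
  }
  return { blocked: false };
}
```

The accuracy figure above reflects exactly this trade-off: patterns must
catch obfuscated attacks while letting legitimate development queries
through.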