# Shopify AI - Planning and Container Logs Fixes

## Summary

This document outlines the fixes applied to resolve issues with Mistral/Groq planning and container logs visibility.

## Issues Fixed

### 1. Groq Planning Not Working

**Problem:** When Groq was selected as the planning provider in the admin panel, no response was returned.
**Root Cause:**

- Incorrect API endpoint URL: `https://api.groq.ai/v1` (wrong domain)
- Incorrect API request format (not using the OpenAI-compatible format)
- Wrong response parsing logic
**Solution:**

- ✅ Updated the Groq API URL to `https://api.groq.com/openai/v1/chat/completions`
- ✅ Changed the request format to OpenAI-compatible (model + messages in the payload)
- ✅ Fixed response parsing to extract from `data.choices[0].message.content`
- ✅ Added comprehensive logging for debugging
- ✅ Set the default model to `llama-3.3-70b-versatile`
- ✅ Added a model chain with fallback models
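The corrected request and parsing shape can be sketched as follows. This is an illustrative sketch, not the exact code in `chat/server.js`; the helper names `buildGroqPayload` and `parseGroqResponse` are assumptions, but the URL, payload shape, and response path match the fix described above.

```javascript
// Illustrative sketch of the OpenAI-compatible Groq request/response shape.
const GROQ_API_URL = 'https://api.groq.com/openai/v1/chat/completions';

// OpenAI-compatible body: a model name plus a messages array
function buildGroqPayload(model, messages) {
  return { model, messages };
}

// The reply text lives at choices[0].message.content
function parseGroqResponse(data) {
  return data?.choices?.[0]?.message?.content ?? null;
}

// Example using a sample response object (no network call)
const payload = buildGroqPayload('llama-3.3-70b-versatile', [
  { role: 'user', content: 'Plan a contact form plugin' },
]);
const sample = {
  choices: [{ message: { role: 'assistant', content: 'Step 1: ...' } }],
};
console.log(payload.model);             // llama-3.3-70b-versatile
console.log(parseGroqResponse(sample)); // Step 1: ...
```

Note that the earlier bug was parsing a non-OpenAI response layout; with the OpenAI-compatible format, any client that already speaks the OpenAI chat-completions schema works unchanged.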
### 2. Mistral Planning Issues

**Problem:** Mistral planning might not work properly due to missing model information.

**Root Cause:**

- The `sendMistralChat` function was not returning the model name in the response

**Solution:**

- ✅ Added a `model` field to the Mistral API response
- ✅ Ensured model information is tracked throughout the planning flow
### 3. Container Logs Not Visible

**Problem:** Even though extensive logging was added to the application, users couldn't see logs when using `docker logs`.

**Root Cause:**

- In `scripts/entrypoint.sh` line 187, the Node.js server output was redirected to a file: `node "$CHAT_APP_DIR/server.js" >/var/log/chat-service.log 2>&1 &`
- This meant all `console.log()` and `console.error()` output was going to `/var/log/chat-service.log` inside the container
- Docker couldn't capture these logs because they weren't going to stdout/stderr

**Solution:**

- ✅ Removed the file redirection from `entrypoint.sh`
- ✅ Logs now go directly to stdout/stderr, where Docker can capture them
- ✅ Users can now see all application logs using `docker logs <container_name>`
### 4. Missing Model Chain Support for Google and NVIDIA

**Problem:** The Google and NVIDIA providers didn't have default model chains defined.

**Solution:**

- ✅ Added `buildGroqPlanChain()` with Llama and Mixtral models
- ✅ Added `buildGooglePlanChain()` with Gemini models
- ✅ Added `buildNvidiaPlanChain()` with Llama models
- ✅ Updated `defaultPlanningChainFromSettings()` to use provider-specific chains
- ✅ All providers now have proper fallback model chains
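The chain builders above can be sketched like this. The function names match those listed in this document, but the bodies are assumptions reconstructed from the default model lists documented below; the real `server.js` implementation may differ in detail.

```javascript
// Sketch of provider-specific plan chains: the first model is the primary,
// the rest are fallbacks tried in order when the primary fails.
function buildGroqPlanChain() {
  return ['llama-3.3-70b-versatile', 'mixtral-8x7b-32768', 'llama-3.1-70b-versatile'];
}

function buildGooglePlanChain() {
  return ['gemini-1.5-flash', 'gemini-1.5-pro', 'gemini-pro'];
}

function buildNvidiaPlanChain() {
  return ['meta/llama-3.1-70b-instruct', 'meta/llama-3.1-8b-instruct'];
}

// Select the chain for whichever provider the admin panel configures
function defaultPlanningChainFromSettings(provider) {
  const chains = {
    groq: buildGroqPlanChain,
    google: buildGooglePlanChain,
    nvidia: buildNvidiaPlanChain,
  };
  return chains[provider] ? chains[provider]() : [];
}

console.log(defaultPlanningChainFromSettings('groq')[0]); // llama-3.3-70b-versatile
```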
## Files Modified

### `chat/server.js`

- Fixed the Groq API implementation (lines ~3405-3450)
- Added model to the Mistral API response (line ~3348)
- Added model to the Google API response (line ~3402)
- Added model to the NVIDIA API response (line ~3479)
- Added the `buildGroqPlanChain()` function (lines ~3097-3104)
- Added the `buildGooglePlanChain()` function (lines ~3106-3113)
- Added the `buildNvidiaPlanChain()` function (lines ~3115-3120)
- Updated `defaultPlanningChainFromSettings()` (lines ~2007-2033)

### `scripts/entrypoint.sh`

- Removed the log file redirection (line 187)
- Changed from: `node ... >/var/log/chat-service.log 2>&1 &`
- Changed to: `node ... &`
## Testing Instructions

### Test 1: Verify Container Logs Work

```bash
# Start the container
docker-compose up -d

# Tail the logs - you should now see application output
docker logs -f shopify-ai-builder

# Look for logs like:
# [2024-01-11T...] Server started on http://0.0.0.0:4000
# [CONFIG] OpenRouter: { configured: true, ... }
# [CONFIG] Mistral: { configured: true, ... }
```
### Test 2: Verify Mistral Planning Works

- Set your `MISTRAL_API_KEY` in environment variables
- Go to Admin Panel → Plan models
- Set "Mistral" as the primary planning provider
- Save the configuration
- Go to the builder and create a new project
- Enter a planning request (e.g., "Create a WordPress plugin for contact forms")
- Check that you receive a response
- Check `docker logs` for Mistral-related logs with the `[MISTRAL]` prefix
### Test 3: Verify Groq Planning Works

- Set your `GROQ_API_KEY` in environment variables
- Go to Admin Panel → Plan models
- Set "Groq" as the primary planning provider
- Save the configuration
- Go to the builder and create a new project
- Enter a planning request
- Check that you receive a response
- Check `docker logs` for Groq-related logs with the `[GROQ]` prefix
### Test 4: Verify Provider Fallback

- Configure multiple providers in the planning chain
- Intentionally use an invalid API key for the first provider
- Make a planning request
- Verify that it automatically falls back to the next provider
- Check the logs to see the fallback chain in action
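The fallback behavior this test exercises amounts to trying each provider in priority order and moving on when one fails. A minimal sketch, assuming a generic per-provider call (`askProvider` is a stand-in for the real provider calls in `server.js`, not an actual function in the codebase):

```javascript
// Try each chain entry in order; return the first successful reply,
// collecting errors so the final failure message names every provider tried.
async function planWithFallback(chain, askProvider) {
  const errors = [];
  for (const entry of chain) {
    try {
      const reply = await askProvider(entry);
      return { provider: entry.provider, reply };
    } catch (err) {
      // Log and continue down the chain - this is the fallback in action
      errors.push(`${entry.provider}: ${err.message}`);
    }
  }
  throw new Error(`All providers failed: ${errors.join('; ')}`);
}

// Example: the first provider has a bad key, the second succeeds
const chain = [
  { provider: 'mistral', model: 'mistral-large-latest' },
  { provider: 'groq', model: 'llama-3.3-70b-versatile' },
];
planWithFallback(chain, async (entry) => {
  if (entry.provider === 'mistral') throw new Error('401 invalid API key');
  return 'Plan: ...';
}).then((result) => console.log(result.provider)); // groq
```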
## Environment Variables Required

### For Mistral Planning

```bash
MISTRAL_API_KEY=your_mistral_key_here
MISTRAL_API_URL=https://api.mistral.ai/v1/chat/completions  # Optional, uses the default if not set
```

### For Groq Planning

```bash
GROQ_API_KEY=your_groq_key_here
GROQ_API_URL=https://api.groq.com/openai/v1/chat/completions  # Optional, uses the default if not set
```

### For Google Planning (if using)

```bash
GOOGLE_API_KEY=your_google_key_here
```

### For NVIDIA Planning (if using)

```bash
NVIDIA_API_KEY=your_nvidia_key_here
```
## Admin Panel Configuration

### Setting Up Planning Providers

- **Access Admin Panel:**
  - Navigate to `/admin/login`
  - Log in with admin credentials
- **Configure Planning Priority:**
  - Go to the "Plan models" section
  - You'll see a list of planning models with priority
  - Click "Add planning model" to add providers
  - Drag to reorder (highest priority first)
  - Each row should specify:
    - Provider (openrouter, mistral, groq, google, nvidia)
    - Model (optional - uses defaults if not specified)
- **Configure Rate Limits:**
  - Set tokens-per-minute/day limits per provider
  - Set requests-per-minute/day limits per provider
  - Monitor live usage in the same panel
## Default Models

### Groq

- Primary: `llama-3.3-70b-versatile`
- Fallback 1: `mixtral-8x7b-32768`
- Fallback 2: `llama-3.1-70b-versatile`

### Mistral

- Uses models configured in the admin panel
- Default: `mistral-large-latest`

### Google

- Primary: `gemini-1.5-flash`
- Fallback 1: `gemini-1.5-pro`
- Fallback 2: `gemini-pro`

### NVIDIA

- Primary: `meta/llama-3.1-70b-instruct`
- Fallback: `meta/llama-3.1-8b-instruct`
## Logging Details

### Log Prefixes

All logs use consistent prefixes for easy filtering:

- `[MISTRAL]` - Mistral API operations
- `[GROQ]` - Groq API operations
- `[PLAN]` - Plan message handling
- `[CONFIG]` - Configuration at startup

### Viewing Specific Logs

```bash
# View only Mistral logs
docker logs shopify-ai-builder 2>&1 | grep "\[MISTRAL\]"

# View only Groq logs
docker logs shopify-ai-builder 2>&1 | grep "\[GROQ\]"

# View only planning logs
docker logs shopify-ai-builder 2>&1 | grep "\[PLAN\]"

# View configuration logs
docker logs shopify-ai-builder 2>&1 | grep "\[CONFIG\]"
```
## Verification Checklist

- Container logs are visible using `docker logs`
- Server startup logs show provider configuration
- Mistral planning requests return responses
- Groq planning requests return responses
- Provider fallback works when the primary fails
- Admin panel shows all providers (openrouter, mistral, google, groq, nvidia)
- Rate limiting configuration works
- Usage statistics display correctly
## Known Limitations

- **Google and NVIDIA APIs:** The current implementations use placeholder endpoints. These will need to be updated with the actual API endpoints and request formats if you plan to use them.
- **Model Discovery:** Some providers may not support automatic model discovery. You may need to manually specify model names in the admin panel.
- **API Key Validation:** API keys are not validated on configuration. Invalid keys will only be detected when making actual API calls.
## Troubleshooting

### Issue: Still No Logs Visible

**Solution:** Make sure to rebuild the container after pulling the changes:

```bash
docker-compose down
docker-compose build --no-cache
docker-compose up -d
```

### Issue: Planning Returns Error "API key not configured"

**Solution:** Ensure the environment variables are properly set in your `.env` file or `docker-compose.yml`.

### Issue: Planning Returns No Response

**Solution:**

- Check the container logs for detailed error messages
- Verify the API key is valid
- Check whether the provider has rate limits or is down
- Try configuring a fallback provider

### Issue: Groq Returns Invalid Model Error

**Solution:** The default models should work, but if you get this error, check Groq's documentation for current model names and update the model chain in the admin panel.
## Support

If you encounter issues:

- Check the container logs first: `docker logs -f shopify-ai-builder`
- Look for error messages with provider-specific prefixes
- Verify your API keys are valid
- Check the admin panel configuration
- Try the fallback chain with multiple providers configured