# Shopify AI - Planning and Container Logs Fixes

## Summary

This document outlines the fixes applied to resolve issues with Mistral/Groq planning and container logs visibility.

## Issues Fixed

### 1. Groq Planning Not Working

**Problem:** When Groq was selected as the planning provider in the admin panel, no response was returned.

**Root Cause:**

- Incorrect API endpoint URL: `https://api.groq.ai/v1` (wrong domain)
- Incorrect API request format (not using the OpenAI-compatible format)
- Wrong response parsing logic

**Solution:**

- ✅ Updated Groq API URL to `https://api.groq.com/openai/v1/chat/completions`
- ✅ Changed request format to OpenAI-compatible (model + messages in payload)
- ✅ Fixed response parsing to extract from `data.choices[0].message.content`
- ✅ Added comprehensive logging for debugging
- ✅ Set default model to `llama-3.3-70b-versatile`
- ✅ Added a model chain with fallback models (see the request sketch below)
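
For reference, the corrected request shape can be exercised directly with `curl`. This is a minimal sketch using the endpoint and default model documented above; the prompt is a placeholder, and the exact payload fields sent by `server.js` may differ.

```bash
# Minimal sketch of the corrected Groq request (OpenAI-compatible format).
# Assumes GROQ_API_KEY is set; the prompt is a placeholder.
curl -s https://api.groq.com/openai/v1/chat/completions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.3-70b-versatile",
    "messages": [
      {"role": "user", "content": "Outline a plan for a contact form plugin"}
    ]
  }'
# The planning text is read from choices[0].message.content in the JSON response.
```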

### 2. Mistral Planning Issues

**Problem:** Mistral planning might not work properly due to missing model information.

**Root Cause:**

- The `sendMistralChat` function was not returning the model name in the response

**Solution:**

- ✅ Added a `model` field to the Mistral API response
- ✅ Ensured model information is tracked throughout the planning flow
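
As a quick check outside the app, the Mistral chat completions endpoint (the same URL listed under Environment Variables below) returns the model name alongside the generated message. A minimal sketch, assuming `MISTRAL_API_KEY` is set and `jq` is installed:

```bash
# Confirm the Mistral response carries both the model name and the message
# content. Assumes MISTRAL_API_KEY is set and jq is installed.
curl -s https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral-large-latest",
    "messages": [{"role": "user", "content": "Say hello"}]
  }' | jq '{model: .model, content: .choices[0].message.content}'
```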

### 3. Container Logs Not Visible

**Problem:** Even though extensive logging was added to the application, users couldn't see logs when using `docker logs`.

**Root Cause:**

- In `scripts/entrypoint.sh` line 187, the Node.js server output was redirected to a file:

```bash
node "$CHAT_APP_DIR/server.js" >/var/log/chat-service.log 2>&1 &
```

- This meant all `console.log()` and `console.error()` output was going to `/var/log/chat-service.log` inside the container
- Docker couldn't capture these logs because they weren't going to stdout/stderr

**Solution:**

- ✅ Removed the file redirection from entrypoint.sh
- ✅ Logs now go directly to stdout/stderr, where Docker can capture them
- ✅ Users can now see all application logs using `docker logs <container_name>`

### 4. Missing Model Chain Support for Google and NVIDIA

**Problem:** Google and NVIDIA providers didn't have default model chains defined.

**Solution:**

- ✅ Added `buildGroqPlanChain()` with Llama and Mixtral models
- ✅ Added `buildGooglePlanChain()` with Gemini models
- ✅ Added `buildNvidiaPlanChain()` with Llama models
- ✅ Updated `defaultPlanningChainFromSettings()` to use provider-specific chains
- ✅ All providers now have proper fallback model chains

## Files Modified

1. **chat/server.js**
   - Fixed Groq API implementation (lines ~3405-3450)
   - Added model to Mistral API response (line ~3348)
   - Added model to Google API response (line ~3402)
   - Added model to NVIDIA API response (line ~3479)
   - Added `buildGroqPlanChain()` function (lines ~3097-3104)
   - Added `buildGooglePlanChain()` function (lines ~3106-3113)
   - Added `buildNvidiaPlanChain()` function (lines ~3115-3120)
   - Updated `defaultPlanningChainFromSettings()` (lines ~2007-2033)

2. **scripts/entrypoint.sh**
   - Removed log file redirection (line 187)
   - Changed from: `node ... >/var/log/chat-service.log 2>&1 &`
   - Changed to: `node ... &` (see the snippet below)
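
Written out in full, using the command shown in the root-cause section above:

```bash
# Before (entrypoint.sh line 187): output redirected to a file inside the container
node "$CHAT_APP_DIR/server.js" >/var/log/chat-service.log 2>&1 &

# After: output goes to stdout/stderr, where `docker logs` can capture it
node "$CHAT_APP_DIR/server.js" &
```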

## Testing Instructions

### Test 1: Verify Container Logs Work

```bash
# Start the container
docker-compose up -d

# Tail the logs - you should now see application output
docker logs -f shopify-ai-builder

# Look for logs like:
# [2024-01-11T...] Server started on http://0.0.0.0:4000
# [CONFIG] OpenRouter: { configured: true, ... }
# [CONFIG] Mistral: { configured: true, ... }
```

### Test 2: Verify Mistral Planning Works

1. Set your `MISTRAL_API_KEY` in environment variables
2. Go to Admin Panel → Plan models
3. Set "Mistral" as the primary planning provider
4. Save the configuration
5. Go to the builder and create a new project
6. Enter a planning request (e.g., "Create a WordPress plugin for contact forms")
7. Check that you receive a response
8. Check `docker logs` for Mistral-related logs with the `[MISTRAL]` prefix

### Test 3: Verify Groq Planning Works

1. Set your `GROQ_API_KEY` in environment variables
2. Go to Admin Panel → Plan models
3. Set "Groq" as the primary planning provider
4. Save the configuration
5. Go to the builder and create a new project
6. Enter a planning request
7. Check that you receive a response
8. Check `docker logs` for Groq-related logs with the `[GROQ]` prefix

### Test 4: Verify Provider Fallback

1. Configure multiple providers in the planning chain
2. Intentionally use an invalid API key for the first provider
3. Make a planning request
4. Verify that it automatically falls back to the next provider
5. Check logs to see the fallback chain in action (see the command below)
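
To watch the fallback as it happens, follow the provider-prefixed log lines described under Logging Details below:

```bash
# Follow planning and provider logs while making the request.
# Uses only the log prefixes documented under "Logging Details".
docker logs -f shopify-ai-builder 2>&1 | grep -E "\[PLAN\]|\[MISTRAL\]|\[GROQ\]"
```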

## Environment Variables Required

### For Mistral Planning

```env
MISTRAL_API_KEY=your_mistral_key_here
MISTRAL_API_URL=https://api.mistral.ai/v1/chat/completions # Optional, uses default if not set
```

### For Groq Planning

```env
GROQ_API_KEY=your_groq_key_here
GROQ_API_URL=https://api.groq.com/openai/v1/chat/completions # Optional, uses default if not set
```

### For Google Planning (if using)

```env
GOOGLE_API_KEY=your_google_key_here
```

### For NVIDIA Planning (if using)

```env
NVIDIA_API_KEY=your_nvidia_key_here
```

## Admin Panel Configuration

### Setting Up Planning Providers

1. **Access Admin Panel:**
   - Navigate to `/admin/login`
   - Log in with admin credentials

2. **Configure Planning Priority:**
   - Go to the "Plan models" section
   - You'll see a list of planning models ordered by priority
   - Click "Add planning model" to add providers
   - Drag to reorder (highest priority first)
   - Each row should specify:
     - Provider (openrouter, mistral, groq, google, nvidia)
     - Model (optional - uses defaults if not specified)

3. **Configure Rate Limits:**
   - Set tokens per minute/day limits per provider
   - Set requests per minute/day limits per provider
   - Monitor live usage in the same panel

## Default Models

### Groq

- Primary: `llama-3.3-70b-versatile`
- Fallback 1: `mixtral-8x7b-32768`
- Fallback 2: `llama-3.1-70b-versatile`

### Mistral

- Uses models configured in the admin panel
- Default: `mistral-large-latest`

### Google

- Primary: `gemini-1.5-flash`
- Fallback 1: `gemini-1.5-pro`
- Fallback 2: `gemini-pro`

### NVIDIA

- Primary: `meta/llama-3.1-70b-instruct`
- Fallback: `meta/llama-3.1-8b-instruct`

## Logging Details

### Log Prefixes

All logs use consistent prefixes for easy filtering:

- `[MISTRAL]` - Mistral API operations
- `[GROQ]` - Groq API operations
- `[PLAN]` - Plan message handling
- `[CONFIG]` - Configuration at startup

### Viewing Specific Logs

```bash
# View only Mistral logs
docker logs shopify-ai-builder 2>&1 | grep "\[MISTRAL\]"

# View only Groq logs
docker logs shopify-ai-builder 2>&1 | grep "\[GROQ\]"

# View only planning logs
docker logs shopify-ai-builder 2>&1 | grep "\[PLAN\]"

# View configuration logs
docker logs shopify-ai-builder 2>&1 | grep "\[CONFIG\]"
```

## Verification Checklist

- [ ] Container logs are visible using `docker logs`
- [ ] Server startup logs show provider configuration
- [ ] Mistral planning requests return responses
- [ ] Groq planning requests return responses
- [ ] Provider fallback works when the primary fails
- [ ] Admin panel shows all providers (openrouter, mistral, google, groq, nvidia)
- [ ] Rate limiting configuration works
- [ ] Usage statistics display correctly

## Known Limitations

1. **Google and NVIDIA APIs**: The current implementations use placeholder endpoints. These will need to be updated with the actual API endpoints and request formats if you plan to use them.

2. **Model Discovery**: Some providers may not support automatic model discovery. You may need to manually specify model names in the admin panel.

3. **API Key Validation**: API keys are not validated on configuration. Invalid keys will only be detected when making actual API calls.

## Troubleshooting

### Issue: Still No Logs Visible

**Solution:** Make sure to rebuild the container after pulling the changes:

```bash
docker-compose down
docker-compose build --no-cache
docker-compose up -d
```

### Issue: Planning Returns Error "API key not configured"

**Solution:** Ensure the environment variables are properly set in your `.env` file or `docker-compose.yml`.
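
To confirm the variables actually reached the running container without printing the key values, a sketch assuming the container is named `shopify-ai-builder` (as elsewhere in this document) and that `printenv` is available in the image:

```bash
# Check that provider API keys are present inside the container,
# masking the values so they are not echoed to the terminal.
docker exec shopify-ai-builder printenv \
  | grep -E '^(MISTRAL|GROQ|GOOGLE|NVIDIA)_API_KEY=' \
  | sed 's/=.*/=<set>/'
```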

### Issue: Planning Returns No Response

**Solution:**

1. Check the container logs for detailed error messages
2. Verify the API key is valid
3. Check whether the provider is rate limiting you or is down
4. Try configuring a fallback provider

### Issue: Groq Returns Invalid Model Error

**Solution:** The default models should work, but if you get this error, check Groq's documentation for current model names and update the model chain in the admin panel.
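
You can also query the model list directly, assuming Groq exposes the standard OpenAI-compatible `/models` route alongside the chat completions endpoint used above and that `jq` is installed:

```bash
# List the model IDs currently available to your Groq account.
# Assumes GROQ_API_KEY is set.
curl -s https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  | jq -r '.data[].id'
```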

## Support

If you encounter issues:

1. Check the container logs first: `docker logs -f shopify-ai-builder`
2. Look for error messages with provider-specific prefixes
3. Verify that your API keys are valid
4. Check the admin panel configuration
5. Try the fallback chain with multiple providers configured