Troubleshooting
This guide covers common issues and their solutions when using llms.py.
Installation Issues
pip install fails
Problem: pip install llms-py fails with errors
Solutions:
- Update pip:
  pip install --upgrade pip
- Use Python 3.8+:
  python --version  # Should be 3.8 or higher
- Install in a virtual environment:
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  pip install llms-py
Configuration Issues
Config file not found
Problem: Config file not found error
Solution:
  # Initialize default config
  llms --init
  # Or specify custom path
  llms --config ./my-config.json
Config file corrupted
Problem: JSON parsing errors
Solution:
  # Backup current config
  cp ~/.llms/llms.json ~/.llms/llms.json.backup
  # Recreate default config
  rm ~/.llms/llms.json
  llms --init
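Before deleting a corrupted config, it is often worth locating the actual JSON error. Python's built-in json.tool module reports the line and column of the first syntax problem (a generic JSON check, not an llms.py feature):
  python -m json.tool ~/.llms/llms.json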
Provider Issues
No providers enabled
Problem: “No providers enabled” error
Solution:
  # Check status
  llms --list
  # Enable providers
  llms --enable groq google_free openrouter_free
API key not recognized
Problem: Provider not available despite setting API key
Solutions:
- Check environment variable:
  echo $GROQ_API_KEY
- Export in current shell:
  export GROQ_API_KEY="gsk_..."
- Add to shell profile:
  # Add to ~/.bashrc or ~/.zshrc
  export GROQ_API_KEY="gsk_..."
  # Reload
  source ~/.bashrc
- Set in config file:
  {
    "providers": {
      "groq": {
        "api_key": "gsk_your_actual_key"
      }
    }
  }
Provider authentication failed
Problem: 401 or 403 errors
Solutions:
- Verify API key is correct
- Check API key hasn’t expired
- Verify account has credits/quota
- Test with provider’s official tools
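For example, a key can be tested directly with curl, independently of llms.py. The URL below assumes Groq's OpenAI-compatible models endpoint; substitute the equivalent endpoint for other providers:
  curl -s -H "Authorization: Bearer $GROQ_API_KEY" \
    https://api.groq.com/openai/v1/models
  # A 401/403 response here means the key itself is being rejected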
Provider timeout
Problem: Requests timing out
Solutions:
- Check internet connection
- Try different provider:
  llms --disable slow_provider
  llms --enable fast_provider
- Check provider status page
- Use verbose mode to see details:
  llms --verbose "test"
Model Issues
Model not found
Problem: “Model ‘xyz’ not found” error
Solutions:
- List available models:
  llms ls
- Check provider is enabled:
  llms ls groq  # List groq models
- Enable provider that has the model:
  llms --enable groq
- Use correct model name (check llms.json for mappings)
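Model names come from the mappings in llms.json, so a quick text search of the config can confirm the exact spelling (the search term below is just an example):
  grep -n -i "llama" ~/.llms/llms.json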
Model not responding
Problem: Model requests hang or fail
Solutions:
- Check provider status:
  llms --check groq
- Try different model:
  llms -m alternative-model "test"
- Check verbose logs:
  llms --verbose -m problematic-model "test"
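To tell a genuine hang apart from a slow response, you can cap the wait with the coreutils timeout command (available on most Linux systems; installed as gtimeout by Homebrew's coreutils on macOS):
  # Give up if the request takes longer than 60 seconds
  timeout 60 llms -m problematic-model "test"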
Server Issues
Port already in use
Problem: “Address already in use” error
Solutions:
- Use different port:
  llms --serve 8001
- Kill process using port:
  # Find process
  lsof -i :8000
  # Kill it
  kill -9 <PID>
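The find-and-kill step can also be combined into one command, since lsof -t prints only the PID (this kills whatever is listening on port 8000, so make sure it is the process you expect):
  kill -9 $(lsof -t -i :8000)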
Server not accessible
Problem: Can’t access http://localhost:8000
Solutions:
- Check server is running:
  curl http://localhost:8000
- Check firewall settings
- Try 127.0.0.1 instead of localhost:
  http://127.0.0.1:8000
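On Linux you can also confirm that something is listening on the port and which address it is bound to; a server bound only to 127.0.0.1 is not reachable from other machines (ss ships with iproute2; use netstat where it is unavailable):
  ss -ltn | grep 8000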
Docker Issues
Container won’t start
Problem: Docker container exits immediately
Solutions:
- Check logs:
  docker logs <container-id>
- Verify environment variables:
  docker run -p 8000:8000 \
    -e GROQ_API_KEY=$GROQ_API_KEY \
    ghcr.io/servicestack/llms:latest
- Check API key is set:
  echo $GROQ_API_KEY
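Running the container in the foreground with --rm -it prints startup errors straight to your terminal, which is often the quickest way to see why it exits (standard docker run flags; same image as above):
  docker run --rm -it -p 8000:8000 \
    -e GROQ_API_KEY=$GROQ_API_KEY \
    ghcr.io/servicestack/llms:latest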
Volume mount issues
Problem: Config not persisting
Solutions:
- Use named volume:
  docker run -p 8000:8000 \
    -v llms-data:/home/llms/.llms \
    ghcr.io/servicestack/llms:latest
- Check permissions on local directory:
  chmod -R 755 ./config
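To keep the config in a local directory instead of a named volume, bind-mount it onto the same container path used above (the host path ./config is illustrative):
  docker run -p 8000:8000 \
    -v "$(pwd)/config:/home/llms/.llms" \
    ghcr.io/servicestack/llms:latest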
Multi-Modal Issues
Image not processing
Problem: Image requests fail
Solutions:
- Check image format is supported (PNG, JPG, WEBP, etc.)
- Verify model supports vision:
  llms -m gemini-2.5-flash --image test.jpg "test"
- Check image size (may need to resize large images; see the example after this list)
- Try different vision model
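If a large image is being rejected, ImageMagick (if installed) can downscale it before you retry; the 2048-pixel cap is an arbitrary example, not a documented llms.py limit:
  # Shrink test.jpg only if either side exceeds 2048px
  convert test.jpg -resize "2048x2048>" resized.jpg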
Audio not processing
Problem: Audio requests fail
Solutions:
- Check audio format (MP3, WAV supported; see the conversion example after this list)
- Verify model supports audio:
  llms -m gpt-4o-audio-preview --audio test.mp3 "test"
- Try different audio model
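If your recording is in another container such as M4A or OGG, ffmpeg (if installed) can convert it to a supported format; the filenames are illustrative:
  ffmpeg -i recording.m4a recording.mp3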
PDF not processing
Problem: PDF requests fail
Solutions:
- Verify model supports files:
  llms -m gpt-5 --file test.pdf "test"
- Check PDF isn’t corrupted (see the check after this list)
- Try different file-capable model
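One quick integrity check is to read the file's metadata with pdfinfo from poppler-utils, which reports an error for truncated or damaged PDFs (an external tool, not part of llms.py):
  pdfinfo test.pdf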
Performance Issues
Slow responses
Problem: Requests taking too long
Solutions:
- Use faster models:
  llms -m gemini-2.5-flash "test"  # Fast
  llms -m llama3.3:70b "test"      # Fast via Groq
- Check provider response times:
  llms --check groq
- Use local models for speed:
  llms --enable ollama
  llms -m llama3.3 "test"
- Reduce max_tokens:
  llms --args "max_completion_tokens=100" "test"
Debug Mode
Enable verbose logging to diagnose issues:
  llms --verbose --logprefix "[DEBUG] " "test query"
This shows:
- Enabled providers
- Model routing decisions
- HTTP request/response details
- Error messages with stack traces
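When you need to share these logs (see Getting Help below), capture them to a file and strip anything that looks like an API key first; the key prefixes in the sed pattern are only examples, so extend it for the providers you use:
  llms --verbose "test query" 2>&1 \
    | sed -E 's/(sk-|gsk_)[A-Za-z0-9_-]+/[REDACTED]/g' > llms-debug.log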
Getting Help
If you’re still having issues:
- Check GitHub Issues: github.com/ServiceStack/llms/issues
- Create New Issue: Include:
  - llms.py version
  - Python version
  - Operating system
  - Full error message
  - Steps to reproduce
  - Verbose logs (with API keys redacted)
- Community Support: Join discussions on GitHub
Next Steps
- Configuration - Detailed configuration
- Providers - Provider-specific details
- CLI Usage - Command reference