
Troubleshooting

This guide covers common issues and their solutions when using llms.py.

Problem: pip install llms-py fails with errors

Solutions:

  1. Update pip:

     pip install --upgrade pip

  2. Use Python 3.8+:

     python --version  # Should be 3.8 or higher

  3. Install in a virtual environment:

     python -m venv venv
     source venv/bin/activate  # On Windows: venv\Scripts\activate
     pip install llms-py

Problem: Config file not found error

Solution:

  # Initialize default config
  llms --init

  # Or specify custom path
  llms --config ./my-config.json
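
You can also confirm whether a config file already exists at the default location (~/.llms/llms.json, the same path used in the backup commands in the next section):

  ls -l ~/.llms/llms.json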

Problem: JSON parsing errors

Solution:

  # Backup current config
  cp ~/.llms/llms.json ~/.llms/llms.json.backup

  # Recreate default config
  rm ~/.llms/llms.json
  llms --init
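
Before recreating the file, it can help to pinpoint the actual syntax error. Python’s built-in json.tool module reports the line and column of the first parse failure:

  # Prints e.g. "Expecting ',' delimiter: line 12 column 3" on invalid JSON
  python -m json.tool ~/.llms/llms.json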

Problem: “No providers enabled” error

Solution:

  # Check status
  llms --list

  # Enable providers
  llms --enable groq google_free openrouter_free

Problem: Provider not available despite setting API key

Solutions:

  1. Check the environment variable:

     echo $GROQ_API_KEY

  2. Export it in the current shell:

     export GROQ_API_KEY="gsk_..."

  3. Add it to your shell profile (a quick way to verify this took effect is shown after this list):

     # Add to ~/.bashrc or ~/.zshrc
     export GROQ_API_KEY="gsk_..."
     # Reload
     source ~/.bashrc

  4. Set the key in the config file:

     {
       "providers": {
         "groq": {
           "api_key": "gsk_your_actual_key"
         }
       }
     }
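
Since step 3 only takes effect in shells that read your profile, a quick check is to ask a fresh shell rather than trusting the one you edited (bash shown here; ~/.bashrc is only sourced by interactive shells):

  # -i forces an interactive shell, which sources ~/.bashrc
  bash -ic 'echo $GROQ_API_KEY'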

Problem: 401 or 403 errors

Solutions:

  1. Verify API key is correct
  2. Check API key hasn’t expired
  3. Verify account has credits/quota
  4. Test with the provider’s official tools (one way is shown below)
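
For example, for Groq (assuming its usual OpenAI-compatible endpoint at api.groq.com/openai/v1), a direct request takes llms.py out of the loop; a 200 response with a model list means the key itself is valid:

  curl -s https://api.groq.com/openai/v1/models \
    -H "Authorization: Bearer $GROQ_API_KEY"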

Problem: Requests timing out

Solutions:

  1. Check your internet connection (the timing check after this list can help)

  2. Try a different provider:

     llms --disable slow_provider
     llms --enable fast_provider

  3. Check the provider’s status page

  4. Use verbose mode to see details:

     llms --verbose "test"
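
To separate a slow network from a slow provider, you can time a raw request against the provider’s API directly (Groq shown here, again assuming its OpenAI-compatible endpoint):

  # Prints the HTTP status and total request time
  curl -o /dev/null -s -w "HTTP %{response_code} in %{time_total}s\n" \
    -H "Authorization: Bearer $GROQ_API_KEY" \
    https://api.groq.com/openai/v1/models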

Problem: “Model ‘xyz’ not found” error

Solutions:

  1. List available models:

     llms ls

  2. Check the provider is enabled:

     llms ls groq  # List groq models

  3. Enable a provider that has the model:

     llms --enable groq

  4. Use the correct model name (check llms.json for mappings; a quick search is shown below)
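
Because exact model names vary between providers, searching the default config (the same ~/.llms/llms.json path used earlier in this guide) for a fragment of the name can surface the exact spelling:

  # List every line mentioning "llama" in the config
  grep -n "llama" ~/.llms/llms.json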

Problem: Model requests hang or fail

Solutions:

  1. Check provider status:

     llms --check groq

  2. Try a different model:

     llms -m alternative-model "test"

  3. Check verbose logs:

     llms --verbose -m problematic-model "test"

Problem: “Address already in use” error

Solutions:

  1. Use a different port:

     llms --serve 8001

  2. Kill the process using the port (or use the one-liner after this list):

     # Find the process
     lsof -i :8000
     # Kill it
     kill -9 <PID>
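
Since lsof’s -t flag prints bare PIDs, the find-and-kill steps collapse into a single command:

  # Kill whatever is listening on port 8000
  kill -9 $(lsof -ti :8000)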

Problem: Can’t access http://localhost:8000

Solutions:

  1. Check the server is running (a fuller API check is shown after this list):

     curl http://localhost:8000

  2. Check firewall settings

  3. Try 127.0.0.1 instead of localhost:

     http://127.0.0.1:8000
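
If the root URL responds but client apps still fail, you can exercise the API route directly. This sketch assumes the server exposes the standard OpenAI-compatible chat completions endpoint and that the model name maps to an enabled provider:

  curl -s http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "llama3.3", "messages": [{"role": "user", "content": "test"}]}'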

Problem: Docker container exits immediately

Solutions:

  1. Check the logs:

     docker logs <container-id>

  2. Verify environment variables are being passed (see the --env-file tip after this list):

     docker run -p 8000:8000 \
       -e GROQ_API_KEY=$GROQ_API_KEY \
       ghcr.io/servicestack/llms:latest

  3. Check the API key is set on the host:

     echo $GROQ_API_KEY
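
With several providers, passing each key with -e gets unwieldy; Docker’s --env-file flag reads KEY=value lines from a file instead (the .env filename here is just an example):

  # .env contains lines like: GROQ_API_KEY=gsk_...
  docker run -p 8000:8000 --env-file .env \
    ghcr.io/servicestack/llms:latest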

Problem: Config not persisting

Solutions:

  1. Use a named volume:

     docker run -p 8000:8000 \
       -v llms-data:/home/llms/.llms \
       ghcr.io/servicestack/llms:latest

  2. Check permissions on the local directory (see the bind-mount example after this list):

     chmod -R 755 ./config
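
As an alternative to a named volume, a bind mount keeps the config in a local directory you can inspect and back up directly. This assumes the image reads its config from /home/llms/.llms, as in the named-volume example above:

  docker run -p 8000:8000 \
    -v "$(pwd)/config:/home/llms/.llms" \
    ghcr.io/servicestack/llms:latest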

Problem: Image requests fail

Solutions:

  1. Check the image format is supported (PNG, JPG, WEBP, etc.)

  2. Verify the model supports vision:

     llms -m gemini-2.5-flash --image test.jpg "test"

  3. Check the image size (large images may need resizing; see the example after this list)

  4. Try a different vision model
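
For step 3, ImageMagick (if installed) can downscale an image only when it exceeds a given bound; the 1024x1024 limit here is an arbitrary example, and the trailing > means "only shrink, never enlarge":

  convert test.jpg -resize "1024x1024>" test-small.jpg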

Problem: Audio requests fail

Solutions:

  1. Check the audio format (MP3, WAV supported; a conversion example follows this list)

  2. Verify the model supports audio:

     llms -m gpt-4o-audio-preview --audio test.mp3 "test"

  3. Try a different audio model
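
If the recording is in an unsupported container, ffmpeg (if installed) can transcode it to one of the formats from step 1:

  # Convert e.g. an M4A recording to MP3
  ffmpeg -i input.m4a output.mp3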

Problem: PDF requests fail

Solutions:

  1. Verify the model supports files:

     llms -m gpt-5 --file test.pdf "test"

  2. Check the PDF isn’t corrupted (a quick check is shown below)

  3. Try a different file-capable model
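
For step 2, two quick integrity checks: file should identify the document as a PDF, and pdfinfo (from poppler-utils, if installed) reports errors on truncated or corrupted files:

  file test.pdf     # Should print something like: PDF document, version 1.7
  pdfinfo test.pdf  # Prints a syntax error on a corrupted PDF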

Problem: Requests taking too long

Solutions:

  1. Use faster models:

     llms -m gemini-2.5-flash "test"  # Fast
     llms -m llama3.3:70b "test"  # Fast via Groq

  2. Check provider response times:

     llms --check groq

  3. Use local models for speed:

     llms --enable ollama
     llms -m llama3.3 "test"

  4. Reduce max_tokens:

     llms --args "max_completion_tokens=100" "test"

Enable verbose logging to diagnose issues:

  llms --verbose --logprefix "[DEBUG] " "test query"

This shows:

  • Enabled providers
  • Model routing decisions
  • HTTP request/response details
  • Error messages with stack traces

If you’re still having issues:

  1. Check GitHub Issues: github.com/ServiceStack/llms/issues

  2. Create a new issue that includes:

    • llms.py version
    • Python version
    • Operating system
    • Full error message
    • Steps to reproduce
    • Verbose logs (with API keys redacted)
  3. Community Support: Join discussions on GitHub