Quick Start
1. Set API Keys
Set environment variables for the providers you want to use:
```bash
export OPENROUTER_API_KEY="sk-or-..."
export GROQ_API_KEY="gsk_..."
export GOOGLE_FREE_API_KEY="AIza..."
```

See the API Keys page for a complete list of supported providers.
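If you want to confirm which keys are actually visible to your shell before running llms, a quick check from Python works. This is only a convenience sketch using the variable names shown above; it is not part of the llms CLI:

```python
import os

# Provider API key variables from the export examples above.
KEYS = ["OPENROUTER_API_KEY", "GROQ_API_KEY", "GOOGLE_FREE_API_KEY"]

for name in KEYS:
    status = "set" if os.environ.get(name) else "missing"
    print(f"{name}: {status}")
```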
2. Initialize Configuration
```bash
llms --init
```

This creates a default configuration file at ~/.llms/llms.json.
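To see what was generated, you can pretty-print the config file. The sketch below only assumes a JSON file at the path mentioned above and does not depend on any particular schema:

```python
import json
from pathlib import Path

# Default config location created by `llms --init`.
config_path = Path.home() / ".llms" / "llms.json"

with config_path.open() as f:
    config = json.load(f)

# Pretty-print the generated configuration.
print(json.dumps(config, indent=2))
```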
3. Enable Providers
Enable the providers you want to use:
```bash
llms --enable openrouter_free google_free groq
```

You can also enable premium providers:

```bash
llms --enable openai anthropic grok
```
4. List Available Models
See which models are available:
```bash
llms ls
```

Or list models for specific providers:

```bash
llms ls groq openrouter_free
```
5. Start Chatting
CLI Usage
```bash
# Simple question
llms "Explain quantum computing in simple terms"

# With specific model
llms -m grok-4-fast "jq command to sort openai models by created"

# With system prompt
llms -s "You are a quantum computing expert" "Explain quantum computing"
```
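The CLI can also be driven from scripts. The sketch below shells out to llms with Python's subprocess module; the flags are the ones shown above, and capturing stdout this way is an assumption about how you might wire it into your own tooling rather than a documented integration point:

```python
import subprocess

# Run the llms CLI with a specific model and capture its reply.
result = subprocess.run(
    ["llms", "-m", "grok-4-fast", "Explain quantum computing in simple terms"],
    capture_output=True,
    text=True,
    check=True,
)

print(result.stdout)
```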
Start the Server
Run an OpenAI-compatible server with web UI:
```bash
llms --serve 8000
```

This launches:
- Web UI at http://localhost:8000
- OpenAI-compatible API at http://localhost:8000/v1/chat/completions
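Once the server is running, a simple way to confirm it is reachable is to request the web UI address listed above. This readiness check is a convenience sketch, not an endpoint documented by llms:

```python
import urllib.request

# The web UI address from the list above; a 200 response means the server is up.
with urllib.request.urlopen("http://localhost:8000", timeout=5) as response:
    print("Server responded with status", response.status)
```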
Use the API
```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "kimi-k2",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```
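Because the API is OpenAI-compatible, any OpenAI client can point at it. The sketch below uses the official openai Python package with its base_url option; the placeholder api_key value is an assumption, since a locally run server may not check it:

```python
from openai import OpenAI

# Point the OpenAI client at the local llms server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="kimi-k2",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```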
Next Steps
- Configuration Guide - Customize your setup
- CLI Usage - Learn all CLI commands
- Web UI Features - Explore the web interface
- Image Support - Use images with vision models
- Audio Support - Process audio files
- File Support - Work with PDFs and documents