
Quick Start

Set environment variables for the providers you want to use:

```sh
export OPENROUTER_API_KEY="sk-or-..."
export GROQ_API_KEY="gsk_..."
export GOOGLE_FREE_API_KEY="AIza..."
```

See the API Keys page for a complete list of supported providers.
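If you script your setup, it can help to verify which keys are actually present before enabling providers. A minimal Python sketch; the provider-to-variable mapping below covers only the three keys shown above and is illustrative — the API Keys page is the authoritative list:

```python
import os

# Illustrative mapping of provider names to the key variables shown above;
# other providers (see the API Keys page) follow the same pattern.
KEY_VARS = {
    "openrouter_free": "OPENROUTER_API_KEY",
    "groq": "GROQ_API_KEY",
    "google_free": "GOOGLE_FREE_API_KEY",
}

def configured_providers(env=None):
    """Return providers whose API key variable is set and non-empty."""
    if env is None:
        env = os.environ
    return [name for name, var in KEY_VARS.items() if env.get(var)]

# Check the real environment:
print(configured_providers())
```

Providers whose key is missing here will fail at request time even if enabled in the config.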

Initialize the configuration:

```sh
llms --init
```

This creates a default configuration file at ~/.llms/llms.json.

Enable the providers you want to use:

```sh
llms --enable openrouter_free google_free groq
```

You can also enable premium providers:

```sh
llms --enable openai anthropic grok
```

See which models are available:

```sh
llms ls
```

Or list models for specific providers:

```sh
llms ls groq openrouter_free
```
Ask questions directly from the command line:

```sh
# Simple question
llms "Explain quantum computing in simple terms"

# With a specific model
llms -m grok-4-fast "jq command to sort openai models by created"

# With a system prompt
llms -s "You are a quantum computing expert" "Explain quantum computing"
```

Run an OpenAI-compatible server with a web UI:

```sh
llms --serve 8000
```

This launches:

  • Web UI at http://localhost:8000
  • OpenAI-compatible API at http://localhost:8000/v1/chat/completions
Query the API with curl:

```sh
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "kimi-k2",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```
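Because the endpoint is OpenAI-compatible, any HTTP client works. A minimal Python sketch using only the standard library (the `kimi-k2` model name comes from the example above; substitute any model that `llms ls` reports, and assume the server from `llms --serve 8000` is running):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"

def build_chat_request(model, messages, base_url=BASE_URL):
    """Build the chat completion request: (url, headers, body bytes)."""
    url = f"{base_url}/v1/chat/completions"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"model": model, "messages": messages}).encode()
    return url, headers, body

def chat(model, messages, base_url=BASE_URL):
    """POST the request and return the assistant's reply text."""
    url, headers, body = build_chat_request(model, messages, base_url)
    req = urllib.request.Request(url, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Standard OpenAI-style response shape.
    return data["choices"][0]["message"]["content"]

# Build (but don't send) the request from the curl example above;
# call chat(...) instead to perform the actual round trip.
url, _, _ = build_chat_request("kimi-k2", [{"role": "user", "content": "Hello!"}])
print(url)
```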