Configuration

llms.py uses two main configuration files stored in ~/.llms/:

  • llms.json - Provider and model configuration
  • ui.json - Web UI settings and system prompts

These files are automatically created with defaults when you run llms --init.

The main configuration file, llms.json, has the following structure:

{
  "defaults": {
    "headers": {},
    "text": {},
    "image": {},
    "audio": {},
    "file": {},
    "check": {},
    "limits": {},
    "convert": {}
  },
  "providers": {}
}
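
To make the skeleton concrete, here is a minimal sketch of generating and writing this structure. It is an illustration only (the function name is hypothetical, and the real `llms --init` also pre-fills the request templates shown below):

```python
import json
import tempfile
from pathlib import Path

# Skeleton matching the structure above; llms.py's real defaults are richer.
DEFAULT_CONFIG = {
    "defaults": {key: {} for key in (
        "headers", "text", "image", "audio", "file",
        "check", "limits", "convert")},
    "providers": {},
}

def write_default_config(path: Path) -> None:
    """Create llms.json with defaults, but never overwrite an existing file."""
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():
        path.write_text(json.dumps(DEFAULT_CONFIG, indent=2))

# Exercise it against a throwaway directory instead of the real ~/.llms
with tempfile.TemporaryDirectory() as tmp:
    cfg_path = Path(tmp) / "llms.json"
    write_default_config(cfg_path)
    loaded = json.loads(cfg_path.read_text())
```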

Common HTTP headers for all requests:

"headers": {
"Content-Type": "application/json"
}

Default chat completion request for text prompts:

"text": {
"model": "kimi-k2",
"messages": [
{"role": "user", "content": ""}
]
}

Default request template for image prompts:

"image": {
"model": "gemini-2.5-flash",
"messages": [
{
"role": "user",
"content": [
{"type": "image_url", "image_url": {"url": ""}},
{"type": "text", "text": "Describe the key features of the input image"}
]
}
]
}
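
The empty "url" above is filled per request, typically with either an https URL or a base64 data URI. A minimal sketch of building such a message under that assumption (the helper name is hypothetical, not part of llms.py):

```python
import base64
import mimetypes
import tempfile
from pathlib import Path

def image_message(path, prompt):
    """Fill the image template: embed the file as a base64 data URI."""
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    data = base64.b64encode(Path(path).read_bytes()).decode()
    return [{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{data}"}},
            {"type": "text", "text": prompt},
        ],
    }]

# Demo with a throwaway file standing in for a real image
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
    f.write(b"\x89PNG\r\n\x1a\n")
    tmp_png = f.name
msgs = image_message(tmp_png, "Describe the key features of the input image")
```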

Default request template for audio prompts:

"audio": {
"model": "gpt-4o-audio-preview",
"messages": [
{
"role": "user",
"content": [
{"type": "input_audio", "input_audio": {"data": "", "format": "mp3"}},
{"type": "text", "text": "Transcribe the audio"}
]
}
]
}

Default request template for file attachments:

"file": {
"model": "gpt-5",
"messages": [
{
"role": "user",
"content": [
{"type": "file", "file": {"filename": "", "file_data": ""}},
{"type": "text", "text": "Summarize the document"}
]
}
]
}
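
As with images, the empty "filename" and "file_data" fields are filled per request; "file_data" is commonly a base64 data URI of the document. A sketch under those assumptions (the helper name is hypothetical, and a PDF MIME type is assumed):

```python
import base64
import tempfile
from pathlib import Path

def file_message(path, prompt="Summarize the document"):
    """Fill the file template: filename plus the document as a data URI."""
    p = Path(path)
    data = base64.b64encode(p.read_bytes()).decode()
    return [{
        "role": "user",
        "content": [
            {"type": "file",
             "file": {"filename": p.name,
                      "file_data": f"data:application/pdf;base64,{data}"}},
            {"type": "text", "text": prompt},
        ],
    }]

# Demo with a tiny stand-in document
with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as f:
    f.write(b"%PDF-1.4")
    tmp_pdf = f.name
msg = file_message(tmp_pdf)
```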

Each provider has the following structure:

"groq": {
"enabled": true,
"type": "OpenAiProvider",
"base_url": "https://api.groq.com/openai",
"api_key": "$GROQ_API_KEY",
"models": {
"kimi-k2": "moonshotai/kimi-k2-instruct-0905",
"llama3.3:70b": "llama-3.3-70b-versatile"
},
"pricing": {
"kimi-k2": {"input": 0.0, "output": 0.0}
},
"default_pricing": {"input": 0.0, "output": 0.0}
}
  • enabled: Whether the provider is active
  • type: Provider class (OpenAiProvider, GoogleProvider, OllamaProvider)
  • api_key: API key (use $VAR_NAME for environment variables)
  • base_url: API endpoint URL
  • models: Model name mappings (local name → provider name)
  • pricing: Cost per 1M tokens (input/output) for each model
  • default_pricing: Default pricing if not specified in pricing
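
To make the api_key and pricing fields concrete, here is a small sketch of how such fields are typically interpreted (function names are hypothetical, not llms.py's actual code):

```python
import os

def resolve_api_key(value):
    """Values of the form $VAR_NAME are read from the environment."""
    if value.startswith("$"):
        return os.environ.get(value[1:])
    return value

def request_cost(provider, input_tokens, output_tokens, model):
    """Pricing is cost per 1M tokens; models absent from `pricing`
    fall back to `default_pricing`."""
    p = provider.get("pricing", {}).get(model, provider["default_pricing"])
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example provider entry with non-zero fallback pricing for illustration
provider = {
    "api_key": "$GROQ_API_KEY",
    "pricing": {"kimi-k2": {"input": 0.0, "output": 0.0}},
    "default_pricing": {"input": 1.0, "output": 3.0},
}
```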
Enable or disable providers from the command line:

# Enable providers
llms --enable openrouter groq google_free
# Disable providers
llms --disable ollama openai
List configured providers and their models:

# List all providers and models
llms ls
# List specific providers
llms ls groq anthropic
Set the default model used for text prompts:

llms --default grok-4-fast

This updates defaults.text.model in your configuration.
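
In effect this is a one-key edit of llms.json; a sketch of the equivalent (the function name is hypothetical):

```python
import json
import tempfile
from pathlib import Path

def set_default_model(config_path, model):
    """Rewrite defaults.text.model, the key `llms --default <model>` updates."""
    cfg = json.loads(config_path.read_text())
    cfg.setdefault("defaults", {}).setdefault("text", {})["model"] = model
    config_path.write_text(json.dumps(cfg, indent=2))

# Demo against a throwaway copy rather than the real ~/.llms/llms.json
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "llms.json"
    path.write_text(json.dumps({"defaults": {"text": {"model": "kimi-k2"}}}))
    set_default_model(path, "grok-4-fast")
    new_model = json.loads(path.read_text())["defaults"]["text"]["model"]
```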

Use a custom configuration file:

llms --config /path/to/custom-config.json "Hello"

To reset to defaults, delete your configuration:

rm -rf ~/.llms
llms --init