# Configuration
## Configuration Files
llms.py uses two main configuration files stored in `~/.llms/`:
- `llms.json` - Provider and model configuration
- `ui.json` - Web UI settings and system prompts
These files are automatically created with defaults when you run `llms --init`.
## llms.json Structure
The main configuration file has the following structure:
```json
{
  "defaults": {
    "headers": {},
    "text": {},
    "image": {},
    "audio": {},
    "file": {},
    "check": {},
    "limits": {},
    "convert": {}
  },
  "providers": {}
}
```

### Defaults Section
#### Headers
Section titled “Headers”Common HTTP headers for all requests:
```json
"headers": {
  "Content-Type": "application/json"
}
```

#### Text Template
Default chat completion request for text prompts:
```json
"text": {
  "model": "kimi-k2",
  "messages": [
    {"role": "user", "content": ""}
  ]
}
```
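The empty `content` field in this template is what a CLI prompt fills in before the request is sent. A minimal sketch of that merge, using a hypothetical `build_text_request` helper rather than llms.py's actual internals:

```python
import copy
import json

# The documented defaults.text template, with an empty user message.
TEXT_TEMPLATE = {
    "model": "kimi-k2",
    "messages": [{"role": "user", "content": ""}],
}

def build_text_request(prompt: str) -> dict:
    """Fill a copy of the default text template with a prompt (hypothetical helper)."""
    request = copy.deepcopy(TEXT_TEMPLATE)
    request["messages"][0]["content"] = prompt
    return request

print(json.dumps(build_text_request("Hello"), indent=2))
```

Deep-copying keeps the template itself pristine, so each request starts from the configured defaults.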
#### Image Template

Default request template for image prompts:
```json
"image": {
  "model": "gemini-2.5-flash",
  "messages": [
    {
      "role": "user",
      "content": [
        {"type": "image_url", "image_url": {"url": ""}},
        {"type": "text", "text": "Describe the key features of the input image"}
      ]
    }
  ]
}
```

#### Audio Template
Default request template for audio prompts:
```json
"audio": {
  "model": "gpt-4o-audio-preview",
  "messages": [
    {
      "role": "user",
      "content": [
        {"type": "input_audio", "input_audio": {"data": "", "format": "mp3"}},
        {"type": "text", "text": "Transcribe the audio"}
      ]
    }
  ]
}
```

#### File Template
Default request template for file attachments:
```json
"file": {
  "model": "gpt-5",
  "messages": [
    {
      "role": "user",
      "content": [
        {"type": "file", "file": {"filename": "", "file_data": ""}},
        {"type": "text", "text": "Summarize the document"}
      ]
    }
  ]
}
```

### Providers Section
Each provider has the following structure:
```json
"groq": {
  "enabled": true,
  "type": "OpenAiProvider",
  "base_url": "https://api.groq.com/openai",
  "api_key": "$GROQ_API_KEY",
  "models": {
    "kimi-k2": "moonshotai/kimi-k2-instruct-0905",
    "llama3.3:70b": "llama-3.3-70b-versatile"
  },
  "pricing": {
    "kimi-k2": {"input": 0.0, "output": 0.0}
  },
  "default_pricing": {"input": 0.0, "output": 0.0}
}
```
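Since `pricing` values are cost per 1M tokens, the cost of a request can be estimated from its token usage. A rough sketch (the function and the example prices are illustrative, not part of llms.py):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  pricing: dict[str, float]) -> float:
    """Estimate request cost in dollars from per-1M-token pricing."""
    return (input_tokens * pricing["input"] +
            output_tokens * pricing["output"]) / 1_000_000

# e.g. 12,000 input + 800 output tokens at hypothetical $0.15/$0.60 per 1M tokens
cost = estimate_cost(12_000, 800, {"input": 0.15, "output": 0.60})
print(f"${cost:.6f}")  # -> $0.002280
```

A model not listed under `pricing` would fall back to the provider's `default_pricing`.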
#### Provider Fields

- `enabled`: Whether the provider is active
- `type`: Provider class (`OpenAiProvider`, `GoogleProvider`, `OllamaProvider`)
- `api_key`: API key (use `$VAR_NAME` for environment variables)
- `base_url`: API endpoint URL
- `models`: Model name mappings (local name → provider name)
- `pricing`: Cost per 1M tokens (input/output) for each model
- `default_pricing`: Default pricing if not specified in `pricing`
## Managing Providers
### Enable/Disable Providers
Section titled “Enable/Disable Providers”# Enable providersllms --enable openrouter groq google_free
```sh
# Disable providers
llms --disable ollama openai
```

### List Providers
```sh
# List all providers and models
llms ls
```
```sh
# List specific providers
llms ls groq anthropic
```

### Set Default Model
```sh
llms --default grok-4-fast
```

This updates `defaults.text.model` in your configuration.
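The equivalent edit, shown on an in-memory copy of the documented config layout (a sketch of the change `--default` makes, not llms.py's own code):

```python
def set_default_model(config: dict, model: str) -> dict:
    """Mirror the edit `llms --default <model>` applies to llms.json (sketch)."""
    config["defaults"]["text"]["model"] = model
    return config

config = {"defaults": {"text": {"model": "kimi-k2", "messages": []}}, "providers": {}}
config = set_default_model(config, "grok-4-fast")
print(config["defaults"]["text"]["model"])  # -> grok-4-fast
```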
## Custom Configuration Path
Use a custom configuration file:
```sh
llms --config /path/to/custom-config.json "Hello"
```

## Recreating Configuration
To reset to defaults, delete your configuration:
```sh
rm -rf ~/.llms
llms --init
```

## Next Steps
Section titled “Next Steps”- Providers Reference - Detailed provider information
- CLI Usage - Learn CLI commands
- Server Mode - Run as an API server