Model Aliases

Configure simple, friendly names for models from the 100+ LLM providers supported by LiteLLM.

What are Model Aliases?

Model Aliases are simple names that map to actual LLM provider models. They allow you to:

  • ✅ Hide complex provider-specific model names from clients
  • ✅ Create consistent naming across providers
  • ✅ Switch providers without changing client code
  • ✅ Offer tiered model options (fast, balanced, smart)
  • ✅ Control pricing and access per model

Built on LiteLLM

LiteLLM supports 100+ LLM providers. Model aliases let you expose these models with simple, friendly names instead of provider-specific formats like openai/gpt-4 or anthropic/claude-3-opus-20240229.

How Model Aliases Work

Client Code         Alias           LiteLLM Model                Provider
-----------         -----           -------------                --------
model: "gpt-4"   →   gpt-4    →   openai/gpt-4              →   OpenAI API
model: "claude"  →   claude   →   anthropic/claude-3-opus   →   Anthropic API
model: "smart"   →   smart    →   openai/gpt-4              →   OpenAI API

Example Client Code:

# Client just uses simple alias
response = await client.chat(
    job_id=job_id,
    model="gpt-4",  # Simple, clean
    messages=[...]
)

What Actually Happens:

  1. Client sends model: "gpt-4"
  2. SaaS API looks up the alias: gpt-4 → openai/gpt-4
  3. LiteLLM routes to OpenAI with model gpt-4
  4. Response flows back to the client
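The lookup in step 2 can be sketched as a simple table keyed by alias. This is a minimal illustration, not the actual SaaS API internals; the table layout, field names, and error message are assumptions.

```python
# Illustrative alias table; field names are assumptions, not the real schema.
ALIASES = {
    "gpt-4":  {"litellm_model": "openai/gpt-4", "active": True},
    "claude": {"litellm_model": "anthropic/claude-3-opus-20240229", "active": True},
    "smart":  {"litellm_model": "openai/gpt-4", "active": True},
}

def resolve_alias(alias: str) -> str:
    """Map a client-facing alias to the LiteLLM model string (step 2)."""
    entry = ALIASES.get(alias)
    if entry is None or not entry["active"]:
        raise ValueError(f"Model not found: {alias}")
    return entry["litellm_model"]

print(resolve_alias("smart"))  # openai/gpt-4
```

The client never sees the right-hand side; only the resolved LiteLLM model string is forwarded to the router.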

Creating Model Aliases

Model Aliases interface - create user-facing model names with pricing

Via Admin Dashboard

  1. Navigate to Model Aliases
     • Click "Model Management" → "Aliases"
     • Click "Create Alias"

  2. Fill in Details
     • Alias: Simple name (e.g., gpt-4)
     • LiteLLM Model: Provider/model format (e.g., openai/gpt-4)
     • Description: What this model is for
     • Active: Enable/disable alias

  3. Save
     • Click "Create"
     • Alias is ready to use

Via API

curl -X POST http://localhost:8003/api/model-aliases/create \
  -H "Content-Type: application/json" \
  -d '{
    "alias": "gpt-4",
    "litellm_model": "openai/gpt-4",
    "description": "OpenAI GPT-4 - Most capable model",
    "active": true
  }'

Response:

{
  "alias": "gpt-4",
  "litellm_model": "openai/gpt-4",
  "description": "OpenAI GPT-4 - Most capable model",
  "active": true,
  "created_at": "2024-10-14T12:00:00Z"
}

LiteLLM Model Format

Model aliases must map to valid LiteLLM model names:

Format: provider/model-name

Common Providers

OpenAI:

openai/gpt-4
openai/gpt-4-turbo
openai/gpt-3.5-turbo
openai/gpt-4-vision-preview

Anthropic:

anthropic/claude-3-opus-20240229
anthropic/claude-3-sonnet-20240229
anthropic/claude-3-haiku-20240307
anthropic/claude-3-5-sonnet-20240620

Google:

gemini/gemini-pro
gemini/gemini-1.5-pro
gemini/gemini-1.5-flash

Azure OpenAI:

azure/gpt-4-deployment-name
azure/gpt-35-turbo-deployment-name

AWS Bedrock:

bedrock/anthropic.claude-3-opus-20240229-v1:0
bedrock/anthropic.claude-3-sonnet-20240229-v1:0
bedrock/meta.llama3-70b-instruct-v1:0

See all supported providers in LiteLLM docs
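A quick sanity check on the provider/model-name format can catch typos before an alias is created. This is a hedged sketch: the provider set below is the small sample shown on this page, not LiteLLM's full provider list.

```python
# Sample provider prefixes from this page; LiteLLM supports many more.
KNOWN_PROVIDERS = {"openai", "anthropic", "gemini", "azure", "bedrock"}

def looks_like_litellm_model(name: str) -> bool:
    """True if `name` matches the provider/model-name format."""
    provider, sep, model = name.partition("/")
    return sep == "/" and provider in KNOWN_PROVIDERS and bool(model)

print(looks_like_litellm_model("openai/gpt-4"))  # True
print(looks_like_litellm_model("gpt-4"))         # False (no provider prefix)
```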

Alias Naming Strategies

Strategy 1: Provider-Based Naming

Keep provider in the alias name:

[
  {"alias": "openai-gpt4", "litellm_model": "openai/gpt-4"},
  {"alias": "openai-gpt35", "litellm_model": "openai/gpt-3.5-turbo"},
  {"alias": "claude-opus", "litellm_model": "anthropic/claude-3-opus-20240229"},
  {"alias": "claude-sonnet", "litellm_model": "anthropic/claude-3-sonnet-20240229"},
  {"alias": "gemini-pro", "litellm_model": "gemini/gemini-pro"}
]

Pros:

  • Clear which provider is being used
  • Easy to understand
  • Simple mapping

Cons:

  • Harder to switch providers later
  • Clients know provider details

Strategy 2: Generic/Abstract Naming

Hide provider details with abstract names:

[
  {"alias": "smart", "litellm_model": "openai/gpt-4"},
  {"alias": "balanced", "litellm_model": "openai/gpt-4-turbo"},
  {"alias": "fast", "litellm_model": "openai/gpt-3.5-turbo"},
  {"alias": "vision", "litellm_model": "openai/gpt-4-vision-preview"}
]

Pros:

  • Can switch providers transparently
  • Simple, memorable names
  • Provider-agnostic

Cons:

  • Less clear what model is actually used
  • May need documentation

Strategy 3: Tiered Naming

Name by capability/price tier:

[
  {"alias": "basic", "litellm_model": "openai/gpt-3.5-turbo"},
  {"alias": "professional", "litellm_model": "openai/gpt-4-turbo"},
  {"alias": "enterprise", "litellm_model": "openai/gpt-4"},
  {"alias": "premium", "litellm_model": "anthropic/claude-3-opus-20240229"}
]

Pros:

  • Maps to pricing plans
  • Easy for clients to choose
  • Clear value proposition

Cons:

  • Clients don't know the actual model
  • May limit flexibility

Strategy 4: Use Case Naming

Name by intended application:

[
  {"alias": "chat", "litellm_model": "openai/gpt-3.5-turbo"},
  {"alias": "analysis", "litellm_model": "openai/gpt-4"},
  {"alias": "creative", "litellm_model": "anthropic/claude-3-opus-20240229"},
  {"alias": "code", "litellm_model": "openai/gpt-4"},
  {"alias": "vision", "litellm_model": "openai/gpt-4-vision-preview"}
]

Pros:

  • Guides clients to the right model
  • Self-documenting
  • Clear purpose

Cons:

  • Same model may appear multiple times
  • Can be confusing

Strategy 5: Hybrid Approach

Combine strategies for clarity and flexibility:

[
  // Standard names (most common)
  {"alias": "gpt-4", "litellm_model": "openai/gpt-4"},
  {"alias": "gpt-3.5-turbo", "litellm_model": "openai/gpt-3.5-turbo"},
  {"alias": "claude-3-opus", "litellm_model": "anthropic/claude-3-opus-20240229"},
  {"alias": "claude-3-sonnet", "litellm_model": "anthropic/claude-3-sonnet-20240229"},

  // Convenience aliases
  {"alias": "fast", "litellm_model": "openai/gpt-3.5-turbo"},
  {"alias": "smart", "litellm_model": "openai/gpt-4"},

  // Use case aliases
  {"alias": "vision", "litellm_model": "openai/gpt-4-vision-preview"}
]
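In a hybrid setup, several aliases can point at one model. A reverse map makes those overlaps visible; the mapping below is copied from the example above (with the comments dropped, since JSON has no comment syntax), and the variable names are illustrative.

```python
# Alias → LiteLLM model mapping from the hybrid example above.
ALIAS_MAP = {
    "gpt-4": "openai/gpt-4",
    "gpt-3.5-turbo": "openai/gpt-3.5-turbo",
    "claude-3-opus": "anthropic/claude-3-opus-20240229",
    "claude-3-sonnet": "anthropic/claude-3-sonnet-20240229",
    "fast": "openai/gpt-3.5-turbo",
    "smart": "openai/gpt-4",
    "vision": "openai/gpt-4-vision-preview",
}

# Group aliases by the model they resolve to.
by_model: dict[str, list[str]] = {}
for alias, model in ALIAS_MAP.items():
    by_model.setdefault(model, []).append(alias)

print(by_model["openai/gpt-4"])  # ['gpt-4', 'smart']
```

Overlaps are intentional here: the convenience aliases are just alternate names, and a later provider switch only needs to touch the one entry being redirected.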

Common Model Alias Setups

Setup 1: OpenAI Only

# GPT-4
curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "gpt-4",
    "litellm_model": "openai/gpt-4",
    "description": "Most capable GPT-4 model"
  }'

# GPT-4 Turbo
curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "gpt-4-turbo",
    "litellm_model": "openai/gpt-4-turbo",
    "description": "Faster and cheaper GPT-4"
  }'

# GPT-3.5 Turbo
curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "gpt-3.5-turbo",
    "litellm_model": "openai/gpt-3.5-turbo",
    "description": "Fast and efficient for most tasks"
  }'

Setup 2: Multi-Provider

# OpenAI GPT-4
curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "gpt-4",
    "litellm_model": "openai/gpt-4",
    "description": "OpenAI GPT-4"
  }'

# Anthropic Claude 3 Opus
curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "claude-3-opus",
    "litellm_model": "anthropic/claude-3-opus-20240229",
    "description": "Claude 3 Opus - Most capable Claude model"
  }'

# Google Gemini Pro
curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "gemini-pro",
    "litellm_model": "gemini/gemini-pro",
    "description": "Google Gemini Pro"
  }'

Setup 3: Tiered Models

# Basic Tier
curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "basic",
    "litellm_model": "openai/gpt-3.5-turbo",
    "description": "Fast, cost-effective model"
  }'

# Professional Tier
curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "pro",
    "litellm_model": "openai/gpt-4-turbo",
    "description": "Balanced performance and cost"
  }'

# Enterprise Tier
curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "enterprise",
    "litellm_model": "openai/gpt-4",
    "description": "Most capable model"
  }'
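The three curl calls above can be generated from a single tier table. The helper below only builds the request bodies for /api/model-aliases/create (the endpoint shown in this guide); it does not send them, and the names are illustrative.

```python
import json

# Tier → model mapping taken from the curl examples above.
TIERS = [
    ("basic",      "openai/gpt-3.5-turbo", "Fast, cost-effective model"),
    ("pro",        "openai/gpt-4-turbo",   "Balanced performance and cost"),
    ("enterprise", "openai/gpt-4",         "Most capable model"),
]

def tier_payloads(tiers):
    """Build one /api/model-aliases/create request body per tier."""
    return [
        {"alias": a, "litellm_model": m, "description": d, "active": True}
        for a, m, d in tiers
    ]

for payload in tier_payloads(TIERS):
    print(json.dumps(payload))
```

Keeping the tier table in one place makes it easy to review pricing tiers as a unit and to re-point a tier at a newer model later.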

Managing Model Aliases

List All Aliases

curl http://localhost:8003/api/model-aliases

Response:

{
  "aliases": [
    {
      "alias": "gpt-4",
      "litellm_model": "openai/gpt-4",
      "description": "Most capable GPT-4 model",
      "active": true,
      "usage_count": 1543
    },
    {
      "alias": "claude-3-opus",
      "litellm_model": "anthropic/claude-3-opus-20240229",
      "description": "Claude 3 Opus",
      "active": true,
      "usage_count": 892
    }
  ]
}

View Alias Details

curl http://localhost:8003/api/model-aliases/gpt-4

Response:

{
  "alias": "gpt-4",
  "litellm_model": "openai/gpt-4",
  "description": "Most capable GPT-4 model",
  "active": true,
  "access_groups": ["gpt-models", "premium-models"],
  "teams_with_access": 15,
  "usage_stats": {
    "total_calls": 1543,
    "total_cost_usd": 234.56,
    "avg_tokens_per_call": 850
  }
}
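The usage_stats block above is enough to derive a per-call cost figure, which is handy when comparing aliases. Field names are taken from the example response; the calculation itself is just the ratio.

```python
# Example response fields from the alias-details call above.
usage_stats = {
    "total_calls": 1543,
    "total_cost_usd": 234.56,
    "avg_tokens_per_call": 850,
}

cost_per_call = usage_stats["total_cost_usd"] / usage_stats["total_calls"]
print(f"${cost_per_call:.4f} per call")
```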

Update Alias

curl -X PUT http://localhost:8003/api/model-aliases/gpt-4 \
  -H "Content-Type: application/json" \
  -d '{
    "description": "OpenAI GPT-4 - Most capable model (updated)",
    "active": true
  }'

Disable Alias

Temporarily disable without deleting:

curl -X PUT http://localhost:8003/api/model-aliases/old-model \
  -d '{
    "active": false
  }'

When disabled:

  • Teams can't use this model
  • API returns an error if requested
  • Alias stays in the system for future re-enabling

Delete Alias

Warning

Deleting an alias removes it from all access groups. Teams using this alias will lose access.

curl -X DELETE http://localhost:8003/api/model-aliases/deprecated-model

Switching Providers

One powerful use of aliases: switch providers without changing client code.

Example: Switch from OpenAI to Anthropic

Initial Setup:

curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "smart",
    "litellm_model": "openai/gpt-4",
    "description": "Smart model for complex tasks"
  }'

Clients use it:

response = await client.chat(
    job_id=job_id,
    model="smart",  # Points to GPT-4
    messages=[...]
)

Later: Switch to Claude without client changes:

curl -X PUT http://localhost:8003/api/model-aliases/smart \
  -d '{
    "litellm_model": "anthropic/claude-3-opus-20240229",
    "description": "Smart model for complex tasks (now using Claude)"
  }'

Clients still use same code:

response = await client.chat(
    job_id=job_id,
    model="smart",  # Now points to Claude 3 Opus!
    messages=[...]
)
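The switch is transparent because clients resolve through the mutable server-side table; the PUT only changes one table entry. A minimal sketch, using an in-memory dict as a stand-in for the real alias store:

```python
# In-memory stand-in for the server-side alias table.
aliases = {"smart": "openai/gpt-4"}

def route(model: str) -> str:
    """Server-side routing decision: alias → LiteLLM model."""
    return aliases[model]

assert route("smart") == "openai/gpt-4"                # before the PUT

aliases["smart"] = "anthropic/claude-3-opus-20240229"  # effect of the PUT

assert route("smart") == "anthropic/claude-3-opus-20240229"  # same client call
```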

Adding New Models

When a new model is released:

1. Check LiteLLM Support

Verify the model is supported:

  • Check LiteLLM providers documentation
  • Look for the model format (e.g., openai/gpt-4o)

2. Add API Keys (if new provider)

If it's a new provider, add credentials to LiteLLM config:

# litellm_config.yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

3. Create Model Alias

curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "gpt-4o",
    "litellm_model": "openai/gpt-4o",
    "description": "GPT-4 Omni - Latest OpenAI model"
  }'

4. Add to Access Groups

curl -X POST http://localhost:8003/api/model-access-groups/gpt-models/add-models \
  -d '{
    "model_aliases": ["gpt-4o"]
  }'

5. Notify Teams

Inform clients about new model availability:

New Model Available: gpt-4o

We've added OpenAI's latest GPT-4 Omni model to your available models.

To use it, simply specify: model="gpt-4o" in your API calls.

Benefits:
- Faster response times
- Improved reasoning
- Better at complex tasks

Try it today!

Complete Client Onboarding Example

# 1. Create model aliases
curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "gpt-4",
    "litellm_model": "openai/gpt-4",
    "description": "GPT-4 for complex tasks"
  }'

curl -X POST http://localhost:8003/api/model-aliases/create \
  -d '{
    "alias": "gpt-3.5-turbo",
    "litellm_model": "openai/gpt-3.5-turbo",
    "description": "Fast model for simple tasks"
  }'

# 2. Create access group
curl -X POST http://localhost:8003/api/model-access-groups/create \
  -d '{
    "group_name": "starter-models",
    "description": "Models for starter plan",
    "model_aliases": ["gpt-3.5-turbo"]
  }'

# 3. Create organization
curl -X POST http://localhost:8003/api/organizations/create \
  -d '{
    "organization_id": "org_newclient",
    "name": "New Client Inc"
  }'

# 4. Create team
curl -X POST http://localhost:8003/api/teams/create \
  -d '{
    "organization_id": "org_newclient",
    "team_id": "newclient-prod",
    "team_alias": "Production",
    "access_groups": ["starter-models"],
    "credits_allocated": 1000
  }'

# 5. Client can now use model aliases
# In their code: model="gpt-3.5-turbo"

Best Practices

Naming

  1. Use Standard Names When Possible
     • gpt-4, claude-3-opus, gemini-pro
     • Familiar to developers
     • Easy to remember

  2. Be Consistent
     • If you use hyphens, use them everywhere
     • Stick to lowercase
     • Follow a naming pattern

  3. Avoid Version Numbers in Aliases

    # ❌ Bad: Hard to maintain
    alias: "gpt-4-0613"
    
    # ✅ Good: Can update underlying model
    alias: "gpt-4"
    litellm_model: "openai/gpt-4-0613"
    

Organization

  1. Group Related Models
     • All OpenAI models together
     • All fast models together
     • All vision models together

  2. Use Descriptions
     • Explain what the model is good for
     • Mention speed/cost trade-offs
     • Note any special capabilities

  3. Track Usage
     • Monitor which models are popular
     • Identify underused models
     • Optimize based on actual usage

Maintenance

  1. Regular Updates
     • Update to newer model versions
     • Deprecate old models gradually
     • Test new models before rolling out

  2. Communicate Changes
     • Notify teams before removing models
     • Provide migration guides
     • Offer grace periods

  3. Monitor Costs
     • Track spend per model
     • Identify expensive models
     • Adjust pricing if needed

Troubleshooting

Invalid Model Error

Problem: "Model not found" or "Invalid model"

Solutions:

  1. Verify alias exists:

     curl http://localhost:8003/api/model-aliases/gpt-4

  2. Check alias is active:

     curl http://localhost:8003/api/model-aliases/gpt-4
     # Should show "active": true

  3. Verify team has access:

     curl http://localhost:8003/api/teams/acme-prod
     # Check access_groups include a group with this model
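The three checks above can be combined into one diagnostic. This sketch runs against illustrative in-memory data rather than the live API; the function name and messages are assumptions, but the check order mirrors the troubleshooting steps.

```python
def diagnose(alias, aliases, team_groups, group_models):
    """Return the first failing check, or 'ok' if the alias should work."""
    entry = aliases.get(alias)
    if entry is None:
        return "alias does not exist"           # check 1
    if not entry["active"]:
        return "alias is disabled"              # check 2
    accessible = {m for g in team_groups for m in group_models.get(g, [])}
    if alias not in accessible:
        return "team's access groups do not include this model"  # check 3
    return "ok"

# Illustrative data mirroring the checks above.
aliases = {"gpt-4": {"active": True}, "old-model": {"active": False}}
group_models = {"gpt-models": ["gpt-4"]}

print(diagnose("gpt-4", aliases, ["gpt-models"], group_models))      # ok
print(diagnose("old-model", aliases, ["gpt-models"], group_models))  # alias is disabled
```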

LiteLLM Routing Error

Problem: "Provider authentication failed" or routing error

Solutions:

  1. Check that the LiteLLM config has provider credentials
  2. Verify the litellm_model format is correct
  3. Test the model directly via LiteLLM:

curl http://localhost:8002/chat/completions \
  -d '{"model": "openai/gpt-4", "messages": [...]}'

Alias Already Exists

Problem: "Alias already exists"

Solutions:

  1. Use a different alias name
  2. Update the existing alias instead:

curl -X PUT http://localhost:8003/api/model-aliases/gpt-4 \
  -d '{"litellm_model": "openai/gpt-4-turbo"}'

Next Steps

Now that you understand model aliases:

  1. Create Access Groups - Group aliases for team access
  2. Assign to Teams - Give teams access to models
  3. Monitor Usage - Track which models are used
  4. Review LiteLLM Docs - See all supported models

Quick Reference

Create Alias

POST /api/model-aliases/create
{
  "alias": "gpt-4",
  "litellm_model": "openai/gpt-4",
  "description": "GPT-4 model",
  "active": true
}

List Aliases

GET /api/model-aliases

Update Alias

PUT /api/model-aliases/{alias}
{
  "description": "Updated description",
  "litellm_model": "openai/gpt-4-turbo"
}

Disable Alias

PUT /api/model-aliases/{alias}
{
  "active": false
}

Delete Alias

DELETE /api/model-aliases/{alias}