Testing Overview¶
Testing Philosophy¶
The SaasLiteLLM platform follows a pragmatic testing approach that balances comprehensive coverage with practical development needs:
- Functional Testing First: Focus on integration tests that verify end-to-end functionality
- Real-World Scenarios: Tests reflect actual use cases (team creation, job tracking, credit management)
- Database-Centric: Most tests validate database interactions and API responses
- Developer-Friendly: Tests are easy to run and provide clear feedback
Testing Strategy¶
Current Testing Approach¶
The project currently emphasizes integration and functional testing over unit testing. This approach is well-suited for a SaaS platform where:
- API Contracts Matter: Testing complete request/response flows is more valuable than isolated unit tests
- Database Interactions: Most business logic involves database operations
- Multi-Service Architecture: Testing the interaction between LiteLLM proxy and SaaS API wrapper
What Gets Tested¶
- API Endpoints: All REST API endpoints for teams, organizations, model groups, and credits
- Database Operations: CRUD operations and data integrity
- LiteLLM Integration: Team creation, virtual key generation, and model assignment
- Credit System: Credit allocation, deduction, and balance tracking
- Job Tracking: Job creation, LLM call tracking, and cost aggregation
Test Types¶
1. Unit Tests¶
Location: /tests/
Purpose: Test individual components and modules in isolation
Current Coverage:
- Basic import tests (test_main.py)
- Settings and configuration loading
Running Unit Tests:
# Activate virtual environment
source .venv/bin/activate
# Install test dependencies
uv pip install pytest pytest-asyncio
# Run all unit tests
pytest tests/
# Run with verbose output
pytest tests/ -v
# Run specific test file
pytest tests/test_main.py
# Run with coverage
pytest tests/ --cov=src --cov-report=html
2. Integration Tests¶
Location: /scripts/
Purpose: Test complete workflows across multiple services
Test Scripts:
test_minimal_version.py¶
Tests core functionality without LiteLLM integration:
- Health checks
- Organization creation
- Model group setup
- Team creation with credits
- Credit management operations
test_full_integration.py¶
Tests complete LiteLLM integration:
- Team creation in LiteLLM
- Virtual key generation
- Model group assignment
- Credit allocation
- Database synchronization
See Integration Tests for detailed documentation.
3. End-to-End Tests¶
Purpose: Simulate real user workflows from start to finish
Scenarios:
- Complete job lifecycle (create → execute → complete)
- Team onboarding workflow
- Multi-call job with cost tracking
- Credit exhaustion scenarios
Running E2E Tests:
# Ensure all services are running
./scripts/docker_setup.sh
python scripts/start_local.py # Terminal 1
python scripts/start_saas_api.py # Terminal 2
# Run integration test (acts as E2E test)
python scripts/test_full_integration.py
Testing Tools¶
Primary Tools¶
pytest¶
- Version: 7.4.0+
- Purpose: Test framework and runner
- Configuration: No pytest.ini currently; uses defaults
- Key Features:
- Simple assertion syntax
- Automatic test discovery
- Rich failure reporting
- Fixture support for setup/teardown
pytest-asyncio¶
- Purpose: Testing async FastAPI endpoints
- Usage: Handles async test functions and fixtures
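A minimal sketch of an async endpoint test, assuming httpx is installed and that src/saas_api.py exposes a FastAPI instance named app (the import path and /health route are illustrative assumptions):
import pytest
import httpx
from src.saas_api import app  # assumed name of the FastAPI instance

@pytest.mark.asyncio
async def test_health_endpoint():
    # Drive the FastAPI app in-process, no running server required
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        response = await client.get("/health")
    assert response.status_code == 200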
Supporting Tools¶
requests¶
- Purpose: HTTP API testing
- Usage: All integration tests use requests library to call API endpoints
- Example:
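A minimal sketch of the style used in the integration scripts (the base URL and /health path are assumptions for illustration):
import requests

BASE_URL = "http://localhost:8000"  # assumed local SaaS API address

def test_health_check():
    # Confirm the API is reachable before running further checks
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200, f"Health check failed: {response.text}"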
Docker & Docker Compose¶
- Purpose: Test environment setup
- Services:
- PostgreSQL (database)
- Redis (caching)
- LiteLLM proxy
- SaaS API
Code Quality Tools¶
black¶
- Purpose: Code formatting
- Configuration: Line length 88 (Black's default)
- Usage:
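For example, formatting the source, test, and script directories (paths taken from the repository layout shown under Test Organization):
black src/ tests/ scripts/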
ruff¶
- Purpose: Fast Python linter
- Usage:
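For example, linting the same directories with the standard check command:
ruff check src/ tests/ scripts/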
mypy (optional)¶
- Purpose: Static type checking
- Usage:
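For example, type-checking the application code:
mypy src/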
Running Tests¶
Prerequisites¶
1. Install Dependencies
2. Start Docker Services
3. Start Application Services
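A sketch of these three steps, using commands that appear elsewhere in this guide (run the two service scripts in separate terminals):
# 1. Install test dependencies
source .venv/bin/activate
uv pip install pytest pytest-asyncio
# 2. Start Docker services (PostgreSQL, Redis, LiteLLM proxy)
./scripts/docker_setup.sh
# 3. Start application services
python scripts/start_local.py    # Terminal 1
python scripts/start_saas_api.py # Terminal 2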
Quick Test Commands¶
# Run all unit tests
pytest tests/
# Run unit tests with verbose output
pytest tests/ -v
# Run specific test file
pytest tests/test_main.py
# Run integration tests
python scripts/test_minimal_version.py
python scripts/test_full_integration.py
# Run with pytest verbose mode
pytest tests/ -vv
# Run tests matching pattern
pytest tests/ -k "test_import"
# Show print statements during tests
pytest tests/ -s
Test Output¶
Successful test output example:
============================= test session starts ==============================
platform darwin -- Python 3.12.0, pytest-7.4.0, pluggy-1.0.0
rootdir: /Users/keithelliott/repos/SaasLiteLLM
collected 2 items
tests/test_main.py .. [100%]
============================== 2 passed in 0.45s ===============================
Test Organization¶
Directory Structure¶
SaasLiteLLM/
├── tests/ # Unit tests
│ ├── __init__.py
│ └── test_main.py # Basic import and settings tests
│
├── scripts/ # Integration test scripts
│ ├── test_minimal_version.py # Core functionality tests
│ └── test_full_integration.py # Full LiteLLM integration tests
│
└── src/ # Application code
├── saas_api.py # Main SaaS API
├── models/ # Database models
└── api/ # API endpoints
Test Naming Conventions¶
- Unit test files: test_*.py in the tests/ directory
- Unit test functions: test_* prefix (e.g., test_import_main)
- Integration scripts: test_*.py in the scripts/ directory
- Test descriptions: Clear docstrings explaining what is tested
Best Practices¶
Writing Tests¶
1. Clear Test Names: Use descriptive names that explain what is being tested
2. Arrange-Act-Assert Pattern: Set up the data, perform the action, then verify the result (see the sketch after this list)
3. Handle Edge Cases: Test both success and failure scenarios
4. Use Meaningful Assertions: Provide helpful error messages
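A minimal sketch of the Arrange-Act-Assert pattern in a requests-based test (the base URL, endpoint path, and payload fields are illustrative, not the actual API schema):
import requests

BASE_URL = "http://localhost:8000"  # assumed local SaaS API address

def test_create_organization():
    # Arrange: build the request payload
    payload = {"name": "test_org_example"}
    # Act: call the endpoint under test
    response = requests.post(f"{BASE_URL}/organizations", json=payload, timeout=10)
    # Assert: verify status and returned data with helpful messages
    assert response.status_code == 200, f"Unexpected status: {response.status_code}"
    assert response.json().get("name") == payload["name"], "Organization name mismatch"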
Test Data Management¶
- Use Consistent Test IDs: Prefix test entities with test_ or use unique identifiers (see the sketch below)
- Clean Up Test Data: Handle cases where test data already exists
- Isolate Tests: Each test should be independent and not rely on others
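One lightweight way to generate prefixed, unique identifiers for test entities (a sketch, not existing project code):
import uuid

def make_test_id(prefix: str = "test") -> str:
    # e.g. "test_team_3f9c1a2b": unique per run and easy to find during cleanup
    return f"{prefix}_{uuid.uuid4().hex[:8]}"

team_name = make_test_id("test_team")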
Database Testing¶
- Use Test Database: Ensure tests run against local development database
- Check Data Persistence: Verify data is correctly saved and retrievable
- Test Transactions: Ensure database operations are atomic
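A sketch of a rollback-per-test session fixture supporting the points above, assuming the project's models use SQLAlchemy sessions (the connection URL is hypothetical; point it at the local development database):
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

TEST_DATABASE_URL = "postgresql://postgres:postgres@localhost:5432/saas_dev"  # hypothetical

@pytest.fixture
def db_session():
    # Wrap each test in a transaction and roll it back afterwards,
    # so no test data persists and tests stay isolated
    engine = create_engine(TEST_DATABASE_URL)
    connection = engine.connect()
    transaction = connection.begin()
    session = sessionmaker(bind=connection)()
    try:
        yield session
    finally:
        session.close()
        transaction.rollback()
        connection.close()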
Coverage Goals¶
While the project doesn't currently enforce strict coverage metrics, aim for:
- Critical Paths: 100% coverage of core functionality
- API Endpoints: All endpoints should have integration tests
- Business Logic: Key operations (credits, jobs, teams) fully tested
- Error Handling: Test failure scenarios and edge cases
Next Steps¶
To improve the testing infrastructure:
- Add More Unit Tests:
  - Test individual model classes
  - Test utility functions
  - Test API endpoint handlers
- Create Test Fixtures:
  - Add conftest.py with reusable fixtures (see the sketch after this list)
  - Create database setup/teardown fixtures
  - Add test data generators
- Add Coverage Reporting:
  - Generate HTML reports with pytest tests/ --cov=src --cov-report=html (shown above)
- Set Up CI/CD:
  - Automated test runs on push/PR
  - Coverage reporting in CI
  - Test status badges
- Add Performance Tests:
  - Load testing for API endpoints
  - Database query performance
  - Concurrent request handling
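A minimal sketch of what a shared conftest.py could provide (the fixture names and base URL are illustrative, not existing project code):
import pytest
import requests

@pytest.fixture(scope="session")
def api_base_url() -> str:
    # Assumed local SaaS API address used by integration-style tests
    return "http://localhost:8000"

@pytest.fixture
def api_client(api_base_url):
    # A requests session pre-configured for JSON calls to the API
    session = requests.Session()
    session.headers.update({"Content-Type": "application/json"})
    yield session
    session.close()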
Related Documentation¶
- Integration Tests - Detailed integration testing guide
- Troubleshooting - Common issues and solutions
- API Reference - API endpoint documentation