# OpenAI-Compatible API
Connect to any API that follows the OpenAI API specification.
## Configuration
| Field | Required | Example |
|---|---|---|
| Application Name | Yes | my-openai-compatible-app |
| Base URL | Yes | https://api.example.com/v1 |
| Model Name | Yes | custom-model-name |
| API Key | If required | Your API key (some local servers accept requests without one) |
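The four fields above map directly onto a standard OpenAI-style chat completions request: the Base URL is joined with the `/chat/completions` path, the API key becomes a bearer token, and the model name goes in the request body. A minimal sketch using only the Python standard library (the helper name and field values are illustrative placeholders):

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, prompt):
    """Build an OpenAI-style /chat/completions request from the config fields."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # omit if your server needs no key
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# Placeholder values matching the table above:
req = build_chat_request(
    "https://api.example.com/v1", "sk-...", "custom-model-name", "Hello!"
)
print(req.full_url)  # https://api.example.com/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` returns the server's JSON response, provided the endpoint is reachable.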
## Use Cases
- Self-hosted models (vLLM, Text Generation Inference)
- Local models (Ollama, LM Studio)
- Custom model deployments
- Third-party OpenAI-compatible services
## Example Configurations
vLLM Server:
- Base URL: `http://localhost:8000/v1`
- Model Name: `meta-llama/Llama-3-8b-chat-hf`

Ollama:
- Base URL: `http://localhost:11434/v1`
- Model Name: `llama3`

LM Studio:
- Base URL: `http://localhost:1234/v1`
- Model Name: `local-model`
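The three local setups above differ only in Base URL and Model Name, so they can be kept as a small lookup table. A sketch with the preset values copied from above (the dictionary and helper names are illustrative, not part of the product):

```python
# Preset values copied from the examples above. Local servers typically
# accept any (or no) API key, so none is included here.
PRESETS = {
    "vllm": {
        "base_url": "http://localhost:8000/v1",
        "model": "meta-llama/Llama-3-8b-chat-hf",
    },
    "ollama": {"base_url": "http://localhost:11434/v1", "model": "llama3"},
    "lm_studio": {"base_url": "http://localhost:1234/v1", "model": "local-model"},
}

def chat_endpoint(preset_name):
    """Return the full chat-completions URL for a named preset."""
    preset = PRESETS[preset_name]
    return preset["base_url"].rstrip("/") + "/chat/completions"

print(chat_endpoint("ollama"))  # http://localhost:11434/v1/chat/completions
```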
## Advanced Parameters
| Parameter | Type | Description |
|---|---|---|
| `max_tokens` | integer | Maximum number of tokens to generate |
| `temperature` | float | Sampling randomness (higher values produce more varied output) |
| Additional parameters | varies | Passed through to the endpoint unchanged |
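Because additional parameters are passed through unchanged, the request body is effectively the required fields plus whatever extra keys you supply. A hedged sketch of that merge (function name illustrative; `top_p` shown only as an example of a pass-through key):

```python
def build_body(model, messages, max_tokens=None, temperature=None, **extra):
    """Assemble a chat-completions body; extra kwargs pass through unchanged."""
    body = {"model": model, "messages": messages}
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    if temperature is not None:
        body["temperature"] = temperature
    body.update(extra)  # e.g. top_p, stop, or server-specific options
    return body

body = build_body(
    "custom-model-name",
    [{"role": "user", "content": "Hello!"}],
    max_tokens=256,
    temperature=0.7,
    top_p=0.9,  # not in the table above; passed through to the endpoint as-is
)
print(sorted(body))  # ['max_tokens', 'messages', 'model', 'temperature', 'top_p']
```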
## Setup Steps
1. Navigate to AI Applications → New Application
2. Select the Model Providers tab
3. Click OpenAI-Compatible API
4. Enter your Application Name
5. Enter your Base URL (the endpoint of your OpenAI-compatible server)
6. Enter your Model Name
7. Enter your API Key (if required by your server)
8. Configure Advanced Settings (optional)
9. Click Test Response to verify the connection (optional)
10. Review and submit
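Before submitting, the required fields from the steps above can be checked locally. A minimal validation sketch (field names follow the Configuration table; the API key is treated as optional to cover keyless local servers, and the function name is illustrative):

```python
def validate_config(config):
    """Return a list of problems with an OpenAI-compatible app config."""
    problems = []
    # Required fields per the Configuration table above.
    for field in ("application_name", "base_url", "model_name"):
        if not config.get(field):
            problems.append(f"missing required field: {field}")
    base_url = config.get("base_url", "")
    if base_url and not base_url.startswith(("http://", "https://")):
        problems.append("base_url must start with http:// or https://")
    return problems

config = {
    "application_name": "my-openai-compatible-app",
    "base_url": "http://localhost:11434/v1",
    "model_name": "llama3",
    # "api_key" omitted: many local servers do not require one
}
print(validate_config(config))  # []
```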