OpenAI-Compatible API

Connect to any API that follows the OpenAI API specification.

Configuration

Field             Required  Example
Application Name  Yes       my-openai-compatible-app
Base URL          Yes       https://api.example.com/v1
Model Name        Yes       custom-model-name
API Key           Yes       Your API key
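
Under the hood, these four fields map directly onto a standard OpenAI-style chat completion request. As a minimal sketch using the openai Python package (the values below are the placeholder examples from the table, not real credentials):

from openai import OpenAI

# Base URL and API Key from the configuration table above.
client = OpenAI(
    base_url="https://api.example.com/v1",
    api_key="YOUR_API_KEY",
)

# Model Name from the configuration table above.
response = client.chat.completions.create(
    model="custom-model-name",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)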

Use Cases

  • Self-hosted models (vLLM, Text Generation Inference)
  • Local models (Ollama, LM Studio)
  • Custom model deployments
  • Third-party OpenAI-compatible services

Example Configurations

vLLM Server:

Base URL: http://localhost:8000/v1
Model Name: meta-llama/Llama-3-8b-chat-hf

Ollama:

Base URL: http://localhost:11434/v1
Model Name: llama3

LM Studio:

Base URL: http://localhost:1234/v1
Model Name: local-model
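
Before saving a local configuration, it can help to confirm the server is reachable and see exactly which model names it exposes. A quick sketch against the Ollama URL above; the same pattern works for vLLM and LM Studio (local servers typically ignore the API key, but the client requires a non-empty value):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

# List the models the server advertises; use one of these IDs
# as the Model Name in your configuration.
for model in client.models.list():
    print(model.id)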

Advanced Parameters

Parameter              Type     Description
max_tokens             integer  Maximum number of tokens to generate in the response
temperature            float    Sampling temperature; higher values give more random output
Additional parameters  varies   Passed through unchanged to the endpoint
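
In request terms, the first two rows correspond to standard OpenAI fields, while anything extra is forwarded untouched. A hedged sketch with the openai Python package, where extra_body carries server-specific options (top_k here is purely illustrative; support depends on your server):

from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="custom-model-name",
    messages=[{"role": "user", "content": "Summarize this in one line."}],
    max_tokens=256,            # cap on tokens generated in the response
    temperature=0.2,           # lower values give more deterministic output
    extra_body={"top_k": 40},  # hypothetical extra parameter, passed through as-is
)
print(response.choices[0].message.content)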

Setup Steps

  1. Navigate to AI Applications → New Application
  2. Select Model Providers tab
  3. Click OpenAI-Compatible API
  4. Enter your Application Name
  5. Enter your Base URL (the endpoint of your OpenAI-compatible server)
  6. Enter your Model Name
  7. Enter your API Key (if required by your server)
  8. Configure Advanced Settings (optional)
  9. Click Test Response to verify (optional; if it fails, see the troubleshooting sketch after these steps)
  10. Review and submit
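
If the Test Response check fails, reproducing the request outside the UI can show whether the problem is the server or the configuration. A minimal troubleshooting sketch, again assuming the placeholder values from the Configuration table:

from openai import OpenAI, APIConnectionError, APIStatusError

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

try:
    client.chat.completions.create(
        model="custom-model-name",
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=8,
    )
    print("OK: endpoint reachable and model responded")
except APIConnectionError as err:
    # Nothing answered at the Base URL (wrong host/port, server down).
    print(f"Connection failed - check the Base URL: {err}")
except APIStatusError as err:
    # The server answered with an error; often a bad API Key or Model Name.
    print(f"Server returned {err.status_code} - check API Key and Model Name")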