API Reference
OpenAI-compatible. Drop-in replacement, zero code changes.
Once the engine is running, it exposes three OpenAI-compatible services. Point `base_url` at the corresponding port; `api_key` can be any string.
Service Overview
| Service | Port | Endpoint |
|---|---|---|
| VLM | 8080 | POST /v1/chat/completions |
| Embedding | 8081 | POST /v1/embeddings |
| STT | 8082 | POST /inference |
| Health | all | GET /health |
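Every port answers GET /health, which is handy for waiting on startup. A minimal readiness-poll helper using only the standard library (a sketch; the URL and timeout values are illustrative, and the exact response body is not assumed, only the HTTP 200 status):

```python
import time
import urllib.error
import urllib.request


def wait_ready(url: str, timeout: float = 30.0) -> bool:
    """Poll a /health endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; retry
        time.sleep(0.5)
    return False


# Example: block until the VLM service is ready
# wait_ready("http://127.0.0.1:8080/health")
```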
1. Vision-Language (VLM)
💡 Gemma 4 is a thinking model: it emits reasoning tokens before the final answer. Use `max_tokens >= 256` so the answer itself is not truncated.
Python (openai SDK)
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="not-needed")

# Text only
resp = client.chat.completions.create(
    model="gemma-4-e4b",
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=256,
)

# With image
import base64

img_b64 = base64.b64encode(open("photo.jpg", "rb").read()).decode()
resp = client.chat.completions.create(
    model="gemma-4-e4b",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this photo"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{img_b64}"}},
        ],
    }],
    max_tokens=256,
)
curl
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma-4-e4b",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 256
  }'
JavaScript
const resp = await fetch("http://127.0.0.1:8080/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gemma-4-e4b",
    messages: [{ role: "user", content: "Hello" }],
    max_tokens: 256,
  }),
});
const data = await resp.json();
2. Text Embedding
Python
from openai import OpenAI
client = OpenAI(base_url="http://127.0.0.1:8081/v1", api_key="not-needed")
# Single
resp = client.embeddings.create(model="bge-base-zh", input="Sample text")
vec = resp.data[0].embedding # list[float], len == 768
# Batch
texts = ["Text 1", "Text 2", "Text 3"]
resp = client.embeddings.create(model="bge-base-zh", input=texts)
vectors = [d.embedding for d in resp.data]  # 3 x 768
curl
curl http://127.0.0.1:8081/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model":"bge-base-zh","input":"Sample text"}'
3. Speech-to-Text (STT)
ℹ️ The STT service uses whisper.cpp's native multipart endpoint, NOT the OpenAI Audio API.
curl
curl http://127.0.0.1:8082/inference \
  -F "file=@audio.wav" \
  -F "language=zh" \
  -F "response_format=json"
# Response: {"text": "transcribed text..."}
Python (requests)
import requests

with open("audio.wav", "rb") as f:
    r = requests.post(
        "http://127.0.0.1:8082/inference",
        files={"file": f},
        data={"language": "zh", "response_format": "json"},
    )
print(r.json())
Limitations
- Bind IP: 127.0.0.1 only — no remote access
- Auth: none (same-host trust model)
- Concurrency: 1 (llama-server single slot, requests queue)
- Context: VLM unlimited by default; Embedding 512 tokens
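Because the server processes one request at a time, concurrent callers simply queue inside the engine. If multiple threads in your application share an endpoint, a client-side lock makes that waiting explicit and keeps only one in-flight request per process (a minimal sketch; `call_engine` and `fn` are illustrative names, where `fn` would be any single-request callable such as `client.chat.completions.create`):

```python
import threading

_engine_lock = threading.Lock()


def call_engine(fn, *args, **kwargs):
    """Serialize access to the single-slot engine from this process.

    fn is any callable that performs one HTTP request to the engine;
    with the lock held, at most one request is in flight at a time.
    """
    with _engine_lock:
        return fn(*args, **kwargs)
```

This does not change server behavior; it only moves the queueing into your process, where you can add timeouts or backpressure around the lock if needed.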