API Reference
OpenAI-compatible. A drop-in replacement; no code changes needed.
Once the engine is running, all three OpenAI-compatible services are exposed. Point base_url at the corresponding port; api_key can be any string.
Service overview
| Service | Port | Endpoint |
|---|---|---|
| VLM | 8080 | POST /v1/chat/completions |
| Embedding | 8081 | POST /v1/embeddings |
| STT | 8082 | POST /inference |
| Health | all | GET /health |
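The /health endpoint on each port can be polled to wait for engine startup before sending requests. A minimal sketch (the `wait_for_health` helper and its injectable `probe` parameter are ours, not part of the API; it assumes /health returns HTTP 200 once the service is ready):

```python
import time
import urllib.error
import urllib.request


def wait_for_health(url: str, timeout: float = 30.0, probe=None) -> bool:
    """Poll a /health URL until it returns HTTP 200 or the timeout expires.

    `probe` is injectable for testing; by default it issues a real GET.
    """
    def default_probe(u):
        try:
            with urllib.request.urlopen(u, timeout=2) as r:
                return r.status == 200
        except (urllib.error.URLError, OSError):
            return False

    probe = probe or default_probe
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe(url):
            return True
        time.sleep(0.5)
    return False


# Example: wait_for_health("http://127.0.0.1:8080/health")
```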
1. Vision-Language (VLM)
💡 Gemma 4 is a thinking model: it writes reasoning before the content. Use max_tokens >= 256 to avoid truncation.
Python (openai SDK)
from openai import OpenAI
client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="not-needed")
# Text only
resp = client.chat.completions.create(
    model="gemma-4-e4b",
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=256,
)
# With image
import base64
img_b64 = base64.b64encode(open("photo.jpg", "rb").read()).decode()
resp = client.chat.completions.create(
    model="gemma-4-e4b",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this photo"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{img_b64}"}},
        ],
    }],
    max_tokens=256,
)

curl
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma-4-e4b",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 256
  }'

JavaScript
const resp = await fetch("http://127.0.0.1:8080/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gemma-4-e4b",
    messages: [{ role: "user", content: "Hello" }],
    max_tokens: 256,
  }),
});
const data = await resp.json();

2. Text embeddings
Python
from openai import OpenAI
client = OpenAI(base_url="http://127.0.0.1:8081/v1", api_key="not-needed")
# Single
resp = client.embeddings.create(model="bge-base-zh", input="Sample text")
vec = resp.data[0].embedding # list[float], len == 768
# Batch
texts = ["Text 1", "Text 2", "Text 3"]
resp = client.embeddings.create(model="bge-base-zh", input=texts)
vectors = [d.embedding for d in resp.data]  # 3 x 768
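The returned vectors are typically compared with cosine similarity, e.g. to rank the batch above against a query embedding. A plain-Python sketch (no external dependencies; with real bge-base-zh output each vector has 768 dimensions):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


# Rank embedded texts against a query vector:
# scores = [cosine_similarity(query_vec, v) for v in vectors]
```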
curl

curl http://127.0.0.1:8081/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model":"bge-base-zh","input":"Sample text"}'

3. Speech-to-Text (STT)
ℹ️ whisper.cpp uses native multipart upload, NOT the OpenAI Audio API.
curl
curl http://127.0.0.1:8082/inference \
  -F "file=@audio.wav" \
  -F "language=zh" \
  -F "response_format=json"
# Response: {"text": "transcribed text..."}

Python (requests)
import requests
with open("audio.wav", "rb") as f:
    r = requests.post(
        "http://127.0.0.1:8082/inference",
        files={"file": f},
        data={"language": "zh", "response_format": "json"},
    )
print(r.json())

Limitations
- Bind IP: 127.0.0.1 only; no remote access
- Auth: none (same-host trust model)
- Concurrency: 1 (single llama-server slot; requests queue)
- Context: VLM unlimited by default; Embedding 512 tokens
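Inputs longer than the embedding context must be split before embedding. A rough character-based chunker (a sketch only: real token counts depend on the bge tokenizer, so the character budget here is a conservative assumption, not a documented limit):

```python
def chunk_text(text: str, max_chars: int = 1000) -> list[str]:
    """Split text into chunks of at most max_chars characters.

    Character count is only a proxy for tokens; 1000 chars is an assumed
    conservative stand-in for the 512-token embedding limit.
    """
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


# Embed each chunk separately, e.g.:
# resp = client.embeddings.create(model="bge-base-zh", input=chunk_text(long_doc))
```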