API Tokens

How to use your token

API tokens start with the `sh-` prefix. Pass your token in the `X-API-Key` header:

curl -H "X-API-Key: sh-your-token-here" \
  https://hub.brewcode.app/v1/audio/transcriptions \
  -F file=@audio.mp3 -F pipeline=full

LLM System Prompt for API access

Copy this prompt to teach an LLM to use the Speech Hub API:

You have access to the Speech Hub API at https://hub.brewcode.app.

FIRST, fetch https://hub.brewcode.app/llm.txt — it is a compact LLM-friendly reference listing every endpoint, scope, model, and chat-mode option with canonical examples. Use it as your primary source before answering anything about the API.

Auth: pass your API key (starts with sh-) as `X-API-Key: sh-...` on every request.

Content format: `messages[].content` accepts EITHER a plain string OR an OpenAI-compatible array of text parts `[{"type":"text","text":"..."}]`. Both forms work identically — arrays are concatenated server-side (joined with \n). Non-text part types (image_url, etc.) return HTTP 422.
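For illustration, the two equivalent content forms can be sketched as follows. The `flatten` helper below is not part of the API — it only mimics the documented server-side join with `\n`:

```python
# Two equivalent ways to send the same user message.

# Plain string content:
msg_string = {"role": "user", "content": "Transcribe greetings.\nThen summarize."}

# OpenAI-compatible array of text parts; the server concatenates
# the "text" fields with "\n", so this is equivalent to msg_string:
msg_parts = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Transcribe greetings."},
        {"type": "text", "text": "Then summarize."},
    ],
}

def flatten(content):
    # Illustrative helper mimicking the documented server-side behavior.
    if isinstance(content, str):
        return content
    return "\n".join(part["text"] for part in content)

assert flatten(msg_string["content"]) == flatten(msg_parts["content"])
```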

Message roles: `system` | `user` | `assistant` | `tool`. Tool messages (role=`tool`) are replies to assistant `tool_calls` — they require `tool_call_id` matching the assistant's call id (HTTP 422 otherwise). Optional `name` field accepted on any message. Legacy `function` role is NOT supported.
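A minimal sketch of the assistant/tool pairing described above — the call id `call_abc123` and the `get_weather` function are made-up illustrative values, not part of the API:

```python
# An assistant turn that requested a tool call, followed by the
# matching tool reply. The tool message's tool_call_id must equal
# the assistant call's id, or the API returns HTTP 422.
assistant_turn = {
    "role": "assistant",
    "content": "",
    "tool_calls": [{
        "id": "call_abc123",  # id chosen by the model
        "type": "function",
        "function": {"name": "get_weather", "arguments": "{\"city\": \"Oslo\"}"},
    }],
}

tool_reply = {
    "role": "tool",
    "tool_call_id": "call_abc123",  # must match the assistant call id above
    "name": "get_weather",          # the optional name field
    "content": "{\"temp_c\": -3}",
}

assert tool_reply["tool_call_id"] == assistant_turn["tool_calls"][0]["id"]
```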

Chat completions support these optional modes (all default OFF, opt-in per request):
  think — enable model reasoning (~10–30 s vs ~500 ms baseline). vLLM models only; qwen2.5:7b returns 400.
  tools + tool_choice — OpenAI-style function calling. tool_choice: "auto" | "required" | "none" | {"type":"function","function":{"name":"..."}}
  parallel_tool_calls — allow several tool calls in one turn (vLLM only).
  response_format — {"type":"json_object"} or {"type":"json_schema","json_schema":{...}}. json_schema is vLLM-only (Ollama returns 400).
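The modes above can be combined in one request body. A sketch, assuming a vLLM-backed model — the model name and tool definition are illustrative, not taken from the docs:

```python
import json

# A chat-completions request body opting into several modes.
# All of these are optional and default OFF.
body = {
    "model": "some-vllm-model",  # placeholder; see llm.txt for real model names
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",                       # or "required" | "none" | named function
    "parallel_tool_calls": True,                 # vLLM only
    "response_format": {"type": "json_object"},  # json_schema variant is vLLM-only
}

print(json.dumps(body, indent=2))
```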

Response surface: `choices[0].message.reasoning` and `choices[0].message.tool_calls` appear when the respective modes are active. `finish_reason` may be "tool_calls".
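A hedged sketch of handling that response surface — the response dict below is hypothetical, showing only the fields named above:

```python
# Hypothetical response illustrating the documented surface:
# message.reasoning and message.tool_calls appear when the
# respective modes are active; finish_reason may be "tool_calls".
resp = {
    "choices": [{
        "finish_reason": "tool_calls",
        "message": {
            "role": "assistant",
            "content": None,
            "reasoning": "The user asked about weather, so call the tool.",
            "tool_calls": [{
                "id": "call_abc123",
                "type": "function",
                "function": {"name": "get_weather", "arguments": "{\"city\": \"Oslo\"}"},
            }],
        },
    }],
}

choice = resp["choices"][0]
msg = choice["message"]
if choice["finish_reason"] == "tool_calls":
    for call in msg.get("tool_calls", []):
        # Run the named function, then send back a role="tool" message
        # whose tool_call_id carries this call's id.
        print(call["id"], call["function"]["name"])
```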

LAZY references (fetch only if llm.txt leaves a question open):
  https://hub.brewcode.app/openapi.yaml — full OpenAPI 3.1 spec
  https://hub.brewcode.app/docs — interactive Scalar UI

Never guess endpoints or fields — rely on llm.txt first, OpenAPI second.

Create Token

Permissions define what the token can access. Select individual scopes, or use the group checkboxes to select a whole group at once.