FAQ
Why use TokenDock?
TokenDock focuses on a carefully selected set of smart, cost-effective models rather than trying to cover every model on the market. For the models we do provide, our goal is to deliver real value through better pricing while maintaining reliable OpenAI-compatible access.
By pooling volume discounts across its customer base, TokenDock gives users access to strong models at lower prices, even when they are just experimenting or starting small.
Do you support the OpenAI SDK?
Yes. Set base_url to https://tokendock.ai/v1 and use your TokenDock API key as the api_key. No other changes needed.
Do you support streaming responses?
Yes. Pass "stream": true in your request body. Responses are delivered as server-sent events in the same format as the OpenAI API.
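Each server-sent event carries a `data:` line holding a JSON chunk, and the stream ends with `data: [DONE]`, as in the OpenAI streaming format. A stdlib-only sketch of extracting the text deltas (the sample lines below are illustrative, not captured output):

```python
import json

def parse_sse_lines(lines):
    """Yield the content deltas from OpenAI-style SSE chat chunks."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Illustrative sample of a streamed reply split across two chunks.
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print("".join(parse_sse_lines(sample)))  # prints "Hello"
```

If you use the OpenAI SDK instead of raw HTTP, passing `stream=True` gives you these chunks as parsed objects and you can skip the SSE handling entirely.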
Can I use multiple API keys?
Yes. Create multiple keys from the API Keys page and use different keys for different projects or environments. All keys share the same credit balance.
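One common pattern is to select the key per deployment environment via environment variables; the variable names below are a suggestion, not a TokenDock convention:

```python
import os

def tokendock_key(env: str) -> str:
    """Look up the TokenDock API key for a given environment.

    Expects variables like TOKENDOCK_API_KEY_DEV / TOKENDOCK_API_KEY_PROD
    (hypothetical names). All keys draw from the same credit balance, so
    separate keys are for attribution, not separate billing.
    """
    return os.environ[f"TOKENDOCK_API_KEY_{env.upper()}"]

# Demo: pretend we are in the dev environment with a stand-in key.
os.environ["TOKENDOCK_API_KEY_DEV"] = "td-example-dev-key"
print(tokendock_key("dev"))
```

Keeping one key per project also makes it easier to match usage logs to a specific application when reviewing costs.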
What happens when my balance runs out?
API calls return HTTP 402 (Payment Required) when your balance is zero. Top up from the Billing page to resume. Requests are not queued.
Are there rate limits?
Rate limits depend on the model and your account tier. If you hit a limit, the API returns HTTP 429. Implement exponential back-off and retry logic in your application.
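A sketch of client-side retry handling under the behavior described above: HTTP 429 is retried with exponential back-off plus jitter, while HTTP 402 fails immediately, since top-up is manual and requests are not queued. Here `send` stands in for whatever function actually issues the HTTP request:

```python
import random
import time

def call_with_backoff(send, max_retries: int = 5, base_delay: float = 1.0):
    """Call `send()` (any function returning (status, body)) with retries.

    HTTP 429 is retried with exponential back-off plus jitter, capped at
    30 seconds per wait; HTTP 402 raises immediately because topping up
    is a manual step and requests are not queued server-side.
    """
    for attempt in range(max_retries):
        status, body = send()
        if status == 429:
            delay = min(base_delay * 2 ** attempt, 30.0)
            time.sleep(delay + random.random() * base_delay)  # add jitter
            continue
        if status == 402:
            raise RuntimeError("Balance exhausted: top up on the Billing page")
        return status, body
    raise RuntimeError(f"Still rate-limited after {max_retries} attempts")

# Demo with a fake sender: rate-limited twice, then a success.
responses = iter([(429, ""), (429, ""), (200, "ok")])
status, body = call_with_backoff(lambda: next(responses), base_delay=0.01)
```

The jitter term spreads out retries from concurrent clients so they do not all hit the limit again at the same moment.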
How do I dispute a charge?
Every API call is logged with the request ID, model, token counts, and cost. Contact support with your request ID and we can retrieve the full log for any disputed transaction. Email us at [email protected].
Which regions are supported?
TokenDock currently supports US, Singapore, and Germany regions. If you need a specific region for data residency, contact support.