Rate Limits

All Bot API requests are rate-limited per bot token using a token bucket algorithm. Staying within these limits avoids 429 Too Many Requests responses.

How It Works

Each bot token has a token bucket per interface and one global bucket shared across all interfaces. Each request consumes one token from the global bucket and one from the relevant interface bucket. Tokens replenish at a fixed rate.

Bucket capacity — maximum number of tokens the bucket can hold. This is your burst allowance.

Replenishment — tokens added to the bucket at regular intervals, up to the capacity.
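The mechanics above can be modeled in a few lines. This is a minimal sketch, not an official SDK; the class name and the small capacity used here are illustrative only.

```python
import time

class TokenBucket:
    """Illustrative token bucket: holds up to `capacity` tokens and
    adds `refill_amount` tokens every `period_s` seconds."""

    def __init__(self, capacity, refill_amount, period_s):
        self.capacity = capacity        # burst allowance
        self.tokens = capacity          # start full
        self.refill_amount = refill_amount
        self.period_s = period_s
        self.last_refill = time.monotonic()

    def _refill(self, now):
        # Credit whole elapsed periods, capped at capacity.
        periods = int((now - self.last_refill) // self.period_s)
        if periods:
            self.tokens = min(self.capacity,
                              self.tokens + periods * self.refill_amount)
            self.last_refill += periods * self.period_s

    def try_consume(self):
        """Consume one token; return False if the bucket is empty."""
        self._refill(time.monotonic())
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A full bucket permits an immediate burst of `capacity` requests; after that, throughput is bounded by the replenishment rate.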

Global Limit

Applied across all endpoints combined, per bot token.

Bucket       Replenish    Period
30 tokens    +30          1s

Per-Interface Limits

Each interface has its own bucket. These are checked in addition to the global limit.

Interface    Bucket    Replenish    Period
IMessages    20        +20          1m
ISpaces      5         +5           1m
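Because a request must pass both its interface bucket and the global bucket, a client-side limiter has to check both before sending. The sketch below shows that dual check with the table's starting capacities; refill is omitted for brevity, and the class itself is an assumption, not part of the API.

```python
class MultiBucketLimiter:
    """Sketch: a request is allowed only if BOTH the global bucket
    and the target interface's bucket have a token available."""

    def __init__(self):
        # Starting token counts from the tables above (buckets full).
        self.global_tokens = 30
        self.interface_tokens = {"IMessages": 20, "ISpaces": 5}

    def allow(self, interface):
        # Check both buckets before consuming from either.
        if self.global_tokens < 1 or self.interface_tokens[interface] < 1:
            return False
        self.global_tokens -= 1
        self.interface_tokens[interface] -= 1
        return True
```

Note that exhausting one interface's bucket does not block the others: after five ISpaces requests, ISpaces calls are refused while IMessages calls still pass, because the global bucket has tokens left.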

Handling 429 Responses

When a bucket is empty, the server returns 429 Too Many Requests.

Check the Retry-After header — it tells you how many seconds to wait before retrying.

Use exponential backoff — if you're consistently hitting limits, increase the delay between requests exponentially.

Queue non-urgent work — batch operations when possible instead of sending many small requests.
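The first two recommendations can be combined into one retry helper: honor Retry-After when the server provides it, and fall back to exponential backoff with jitter otherwise. This is a hedged sketch; `send` stands in for whatever HTTP call your client makes and is assumed to return a status code and response headers.

```python
import random
import time

def request_with_backoff(send, max_retries=5, base_delay=1.0):
    """Call `send` (a callable returning (status, headers)), retrying
    on 429. Prefers the server's Retry-After value; otherwise backs
    off exponentially with a little jitter to avoid thundering herds."""
    for attempt in range(max_retries):
        status, headers = send()
        if status != 429:
            return status
        retry_after = headers.get("Retry-After")
        if retry_after is not None:
            delay = float(retry_after)          # server-specified wait
        else:
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
    raise RuntimeError("rate limited: retries exhausted")
```

Using the server's Retry-After value is preferable to guessing: it reflects exactly when the bucket will next hold a token.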

Verified Bot Limits

Verified bots receive higher rate limits automatically. There is no separate API — once your bot is verified, limits are increased server-side.