# Rate limits

The Swappr API enforces per-API-key rate limits to protect platform stability. Every response includes `X-RateLimit-*` headers so you can pace your client.
## Limits by tier
| Tier | Endpoints | Limit |
|---|---|---|
| Read | All GET endpoints | 120 requests / minute |
| Write | POST / PATCH / DELETE (single-record) | 30 requests / minute |
| Bulk | POST /v1/batches only | 10 requests / minute |
| Anon | Unauthenticated requests | 10 requests / minute (per IP) |
Each tier has an independent budget — read calls don’t deplete your write budget.
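Since the budgets are independent, a client can track them separately by classifying each request before sending it. `tierFor` below is a hypothetical helper (not part of any Swappr SDK) that mirrors the table above:

```typescript
// Hypothetical client-side tier classifier mirroring the limits table.
// Only /v1/batches is named in the docs; everything else follows the
// method-based rules (GET → read, other methods → write).
type Tier = 'read' | 'write' | 'bulk';

function tierFor(method: string, path: string): Tier {
  if (method === 'GET') return 'read';                            // 120/min
  if (method === 'POST' && path === '/v1/batches') return 'bulk'; // 10/min
  return 'write';                                                 // 30/min
}
```

Tracking `X-RateLimit-Remaining` per tier (rather than globally) avoids throttling reads just because the write budget ran out.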
## Response headers

Every response (including 429s) includes:

```
X-RateLimit-Limit: 120
X-RateLimit-Remaining: 117
X-RateLimit-Reset: 1715000000
```

- `X-RateLimit-Limit` — total budget for this tier in the current window
- `X-RateLimit-Remaining` — calls left in the current window
- `X-RateLimit-Reset` — Unix timestamp (seconds) when the window resets
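These headers let you pace proactively instead of reacting to 429s. A minimal sketch, assuming a fetch-style `Headers` object; the 2-call threshold and the function itself are illustrative choices, not part of the API:

```typescript
// Sketch: sleep until the window resets when the budget is nearly spent.
// Reads the documented X-RateLimit-* headers; the <= 2 threshold is arbitrary.
async function paceFromHeaders(headers: Headers): Promise<void> {
  const remaining = Number(headers.get('x-ratelimit-remaining') ?? '1');
  const reset = Number(headers.get('x-ratelimit-reset') ?? '0');
  if (remaining <= 2) {
    // Wait until the reset timestamp (clamped so a past reset waits 0 ms).
    const waitMs = Math.max(0, reset * 1000 - Date.now());
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
}
```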
When you exceed the limit, we return `429 Too Many Requests` with an additional `Retry-After` header (seconds):

```
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 30
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1715000045
Retry-After: 12

{
  "error": {
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Maximum 30 requests per minute for write endpoints. Retry after 12s.",
    "detail": {
      "tier": "write",
      "limit": 30,
      "retry_after_seconds": 12
    }
  }
}
```

## Backoff strategy
Use exponential backoff with jitter, anchored on the `Retry-After` header:

```typescript
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function callWithBackoff(url: string, init: RequestInit) {
  for (let attempt = 1; attempt <= 5; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429) return res;
    // Honor the server's hint as the base delay, double it each attempt,
    // and add up to 1 s of jitter to de-synchronize concurrent clients.
    const retryAfter = parseInt(res.headers.get('retry-after') ?? '60', 10);
    const jitter = Math.random() * 1000;
    await sleep(retryAfter * 1000 * 2 ** (attempt - 1) + jitter);
  }
  throw new Error('Rate limit retries exhausted');
}
```

## Window model
We use a fixed 1-minute window per tier per key. The window resets at the timestamp shown in `X-RateLimit-Reset`. Bursting up to the limit at the start of a window and then waiting is fine.
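The fixed-window behavior described above can be illustrated with a toy counter. This is a sketch of the model, not Swappr's actual server implementation:

```typescript
// Toy fixed-window limiter: one counter per (key, tier), reset each window.
// Illustrates why bursting at the start of a window is allowed: the counter
// only cares about the total within the current window, not the pacing.
class FixedWindow {
  private count = 0;
  private windowStart = 0;

  constructor(private limit: number, private windowMs = 60_000) {}

  allow(now: number = Date.now()): boolean {
    if (now - this.windowStart >= this.windowMs) {
      this.windowStart = now; // new window: budget resets
      this.count = 0;
    }
    return ++this.count <= this.limit;
  }
}
```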
The window is per-API-key, not per-merchant. If you have multiple keys (e.g. one per service in your backend), each gets its own budget. Generate separate keys per workload — payroll, ad-hoc payments, refunds — to isolate them.
## Need higher limits?
Email support@the-technest.com with:
- Your merchant ID + key prefix (e.g. `sk_live_aaaa…`)
- Expected peak QPS
- Justification (e.g. weekly payroll for 50k recipients)
We’ll bump the per-key limit on a case-by-case basis. We don’t publish higher tiers because each increase requires ops review.
## Edge cases
- Bulk endpoints have their own bucket — `POST /v1/batches` has its own 10/min budget, independent of write endpoints. Calling it doesn’t deplete the `POST /v1/payouts` budget.
- 5xx and 429 responses don’t deduct from your budget — only successful and other 4xx requests are counted.
- Auth failures (401, 403) deduct from the anon bucket — not from your key’s budget. So a leaked key being probed by an attacker won’t lock out your legitimate traffic.
- Cron / scheduled work — stagger start times across your fleet to avoid cliff-spikes at the top of the minute.
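One way to stagger cron starts is to derive a stable per-host offset from the host name. `cronOffsetSeconds` is a hypothetical helper using a toy FNV-1a hash; any stable hash works:

```typescript
// Map a host identifier to a stable 0–59 s offset so fleet members don't
// all fire at second :00 of the minute. FNV-1a is used here purely for
// illustration; the only requirement is a deterministic spread.
function cronOffsetSeconds(hostId: string): number {
  let h = 0x811c9dc5;
  for (const ch of hostId) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h % 60;
}
```

Each worker then sleeps `cronOffsetSeconds(hostname)` seconds after its scheduled tick before calling the API.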