Rate Limits & Tokens

The Json2doc API uses both rate limiting and token-based usage tracking to ensure fair usage and maintain service quality. This page explains how both systems work and how to optimize your usage.

Current Limits (Free Plan)

Since Json2doc is currently available only on the Free Plan, here are the current limits:

Rate Limits

| Rate Limit Type | Free Plan Limit | Window |
| --- | --- | --- |
| API Requests | 300 requests | 15 minutes |
| Authentication | 5 attempts | 15 minutes |
| File Uploads | 5 uploads | 1 minute |
| Job Creation | 5 jobs | 1 minute |
| API Key Creation | 3 keys | 1 hour |
| Concurrent Jobs | 5 jobs | - |

Document & Processing Limits

| Limit Type | Free Plan Limit | Description |
| --- | --- | --- |
| File Size | 100 MB | Maximum size per uploaded file (templates, images) |
| File TTL (Time to Live) | 1-48 hours | Files are automatically deleted after the TTL expires (default: 1 hour) |
| Document Pages | 100 pages | Maximum pages per document |
| Document Elements | 1,000 elements | Maximum content elements (text, images, tables) per document |
| Tables per Document | 50 tables | Maximum tables per document |
| Images per Document | 100 images | Maximum images per document |
| Template Variables | 50 variables | Maximum variables per template |
| Document Builder Sessions | 10 active sessions | Maximum concurrent document builder sessions |
| Sections per Document | 50 sections | Maximum sections in document mode |

Supported File Types

| Category | Supported Formats | Description |
| --- | --- | --- |
| Templates | DOCX | Word document templates with variable placeholders |
| Images | PNG, JPEG, JPG, GIF, WebP | Images for insertion into documents |

File Storage & TTL (Time to Live)

Json2doc uses a temporary file storage system to ensure efficient resource usage:

  • Default TTL: All uploaded files expire after 1 hour by default
  • Configurable TTL: You can extend the TTL up to 48 hours during upload
  • Automatic Cleanup: Files are automatically deleted once their TTL expires
  • Use Case: Perfect for document generation workflows where templates and assets are only needed temporarily

Specifying TTL during upload:

curl -X POST "https://api.json2doc.com/v1/files/upload" \
-H "x-api-key: YOUR_API_KEY" \
-F "file=@template.docx" \
-F "category=template" \
-F "ttlHours=24"

File Expiration

Once a file expires and is deleted, it cannot be recovered. Plan your workflows accordingly and ensure you download results before expiration.

Custom Plans Available

Need higher limits or custom quotas? Contact us through the dashboard contact form to discuss custom plans tailored to your needs.

Token System

Json2doc uses a token-based system to track and bill for processing resources. Tokens are consumed for various operations:

Token Pricing

| Operation | Token Cost | Calculation Method |
| --- | --- | --- |
| File Upload | 1 token | Fixed cost per file |
| Template Processing | 5 tokens per page | Based on template page count |
| Document Processing | 10 tokens per page | Based on document page count |
| Document Builder Session | See Document Processing | Based on final page count |

Token Allocation

| Plan | Monthly Tokens | Pricing |
| --- | --- | --- |
| Free Plan | 1,000 tokens | Free |
| Custom Plans | Variable | Contact sales |

Detailed Token Calculation

Template Jobs

  • Per page: 5 tokens (TOKENS_PER_TEMPLATE_PAGE)
  • Formula: page_count × TOKENS_PER_TEMPLATE_PAGE
  • Example: 3-page template = 3 × 5 = 15 tokens

Document Jobs

  • Per page: 10 tokens (TOKENS_PER_DOCUMENT_PAGE)
  • Formula: page_count × TOKENS_PER_DOCUMENT_PAGE
  • Example: 5-page document = 5 × 10 = 50 tokens
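
The formulas above can be folded into a small client-side estimator (a sketch; `estimateJobTokens` is a hypothetical helper, with the per-page and per-upload rates taken from the pricing table above):

```javascript
// Token rates from the pricing table above.
const TOKENS_PER_TEMPLATE_PAGE = 5;
const TOKENS_PER_DOCUMENT_PAGE = 10;
const TOKENS_PER_FILE_UPLOAD = 1;

// Estimate the total token cost of a job: the fixed upload cost for each
// file plus the per-page processing cost for the job type.
function estimateJobTokens(jobType, pageCount, fileCount = 0) {
  const perPage = jobType === 'template'
    ? TOKENS_PER_TEMPLATE_PAGE
    : TOKENS_PER_DOCUMENT_PAGE;
  return fileCount * TOKENS_PER_FILE_UPLOAD + pageCount * perPage;
}

// 3-page template with one uploaded file: 1 + 3 × 5 = 16 tokens
console.log(estimateJobTokens('template', 3, 1));
```
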

Token Charging

Tokens are only charged when a job completes successfully. If a job fails, no processing tokens are consumed; the 1-token cost for files already uploaded is not refunded.

How Limits Work

Rate Limiting

Rate limits are applied per API key and use a sliding window algorithm:

  • Request Rate Limit: Maximum number of API requests per time window (RATE_LIMIT_MAX per RATE_LIMIT_WINDOW)
  • Concurrent Jobs: Maximum number of jobs processing simultaneously (MAX_CONCURRENT_JOBS_PER_USER)
  • Time-based Windows: Different limits for different operations with configurable windows

Processing Limits

Processing limits are enforced during job validation to ensure system stability:

  • File Size Validation: Checked during upload (100 MB max)
  • Page Limits: Checked during job validation against your plan's maximum (100 pages on the Free Plan)
  • Element Limits: Checked against your plan's maximum (1,000 elements on the Free Plan)
  • Complexity Limits: Prevent overly complex documents that could impact performance

Rate Limit Headers

Every API response includes headers showing your current rate limit status:

HTTP/1.1 200 OK
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 299
X-RateLimit-Reset: 1642584000
X-RateLimit-Window: 900

| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Number of requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the rate limit resets |
| X-RateLimit-Window | Rate limit window duration in seconds |

Handling Rate Limits

429 Too Many Requests

When you exceed the rate limit, the API returns a 429 status code:

{
  "success": false,
  "error": "Too many requests, please try again later"
}

Monitoring Your Usage

Token Usage Tracking

Monitor your token consumption through the API:

curl -H "x-api-key: YOUR_API_KEY" \
https://api.json2doc.com/v1/auth/stats

Response includes current token usage:

{
  "success": true,
  "data": {
    "tokensUsed": 250,
    "tokensRemaining": 750,
    "resetDate": "2024-02-01T00:00:00Z"
  }
}
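
Before submitting a large job, you can compare the stats response against the job's estimated cost (a sketch; `canAffordJob` and `checkBudget` are hypothetical helpers built on the `/v1/auth/stats` endpoint shown above):

```javascript
// Decide whether a job fits in the remaining monthly token allocation.
// `stats` has the shape of the `data` object returned by /v1/auth/stats.
function canAffordJob(stats, estimatedTokens) {
  return stats.tokensRemaining >= estimatedTokens;
}

// Fetch live usage and throw before submitting a job that would fail.
async function checkBudget(apiKey, estimatedTokens) {
  const response = await fetch('https://api.json2doc.com/v1/auth/stats', {
    headers: { 'x-api-key': apiKey }
  });
  const { data } = await response.json();
  if (!canAffordJob(data, estimatedTokens)) {
    throw new Error(`Insufficient tokens: ${data.tokensRemaining} remaining`);
  }
}
```
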

Best Practices

1. Implement Exponential Backoff

async function apiRequest(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      const response = await fetch(url, options);

      if (response.status === 429) {
        // Prefer the server's Retry-After header (seconds); fall back to exponential backoff
        const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || Math.pow(2, i);
        await sleep(retryAfter * 1000);
        continue;
      }

      return response;
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      await sleep(Math.pow(2, i) * 1000);
    }
  }
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

2. Monitor Rate Limit Headers

function checkRateLimit(response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset'), 10);

  if (remaining < 10) {
    const waitTime = (reset * 1000) - Date.now();
    console.warn(`Rate limit low. ${remaining} requests remaining. Reset in ${waitTime}ms`);
  }
}

3. Optimize File Uploads

Run independent uploads in parallel and use Promise.allSettled so one failure doesn't abort the rest:

// Upload files efficiently
const uploadFiles = async (files) => {
  const uploads = [];

  for (const file of files) {
    uploads.push(uploadFile(file));
  }

  // Wait for all uploads with per-file error handling
  const results = await Promise.allSettled(uploads);
  return results;
};

4. Efficient Job Status Monitoring

Use smart polling with exponential backoff:

const pollJobStatus = async (jobId) => {
  let delay = 2000; // Start with 2 seconds
  const maxDelay = 30000; // Cap at 30 seconds

  while (true) {
    const job = await getJob(jobId);

    if (job.status === 'COMPLETED' || job.status === 'FAILED') {
      return job;
    }

    await sleep(delay);
    delay = Math.min(delay * 1.5, maxDelay); // Grow the delay up to the cap
  }
};

5. Use Templates for Repetitive Documents

Templates use fewer tokens (5 per page vs 10 per page for documents):

// For invoices, reports, certificates
const templateConfig = {
  templateId: 'invoice-template',
  variables: {
    customerName: 'John Doe',
    invoiceNumber: 'INV-001',
    amount: '$1,234.56'
  }
};

6. Validate Before Processing

Use the validation endpoint to catch errors early and avoid hitting limits:

// Validate the job configuration first
const validation = await fetch('/api/v1/jobs/validate', {
  method: 'POST',
  headers: {
    'x-api-key': API_KEY,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ type: 'document', config: jobConfig })
});

const validationResult = await validation.json();

Troubleshooting Common Issues

Rate Limit Exceeded

  1. Check your current usage in the dashboard
  2. Implement exponential backoff in your code
  3. Consider upgrading to a custom plan

Token Depletion

  1. Monitor your monthly token usage
  2. Optimize job configurations to use fewer tokens
  3. Reduce page counts where possible, since token costs scale with pages

Tool Usage Tokens

Synchronous document tools (merge, compress, remove/keep pages, split, watermark, rotate, protect, DOCX→PDF) cost 25 tokens per call.

Job Processing Delays

  1. Check concurrent job limits (currently 5 jobs)
  2. Monitor current queue status
  3. Implement efficient polling strategies
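
Since the Free Plan caps concurrent jobs at 5, one way to avoid queue delays is to gate submissions client-side (a sketch; `createJobGate` is a hypothetical helper, not part of the API):

```javascript
// The Free Plan allows 5 concurrent jobs; gate submissions client-side
// so extra jobs wait for a free slot instead of being rejected.
const MAX_CONCURRENT_JOBS = 5;

function createJobGate(limit = MAX_CONCURRENT_JOBS) {
  let active = 0;
  const waiting = [];
  return async function run(task) {
    // Wait for a slot if the limit is reached.
    if (active >= limit) {
      await new Promise(resolve => waiting.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      active--;
      // Hand the freed slot to the next waiter, if any.
      const next = waiting.shift();
      if (next) next();
    }
  };
}
```

Usage: `const gate = createJobGate(); const job = await gate(() => createJob(config));` ensures at most five submissions run at once.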

File Upload Failures

  1. File too large: Ensure files are under 100MB
  2. Unsupported format: Check supported file types (DOCX templates, PNG/JPEG/GIF/WebP images)
  3. TTL out of range: Ensure ttlHours parameter is between 1 and 48
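
These three failure modes can also be caught client-side before calling the upload endpoint (a sketch; `validateUpload` is a hypothetical helper using the limits from the tables earlier on this page):

```javascript
// Limits from the tables earlier on this page.
const MAX_FILE_BYTES = 100 * 1024 * 1024; // 100 MB
const SUPPORTED_FORMATS = {
  template: ['docx'],
  image: ['png', 'jpeg', 'jpg', 'gif', 'webp']
};

// Return a list of problems with a planned upload, or an empty list
// if it should be accepted. `file` is a plain { name, sizeBytes } object.
function validateUpload(file, category, ttlHours) {
  const errors = [];
  if (file.sizeBytes > MAX_FILE_BYTES) {
    errors.push('File too large: must be under 100 MB');
  }
  const ext = file.name.split('.').pop().toLowerCase();
  if (!(SUPPORTED_FORMATS[category] || []).includes(ext)) {
    errors.push(`Unsupported format ".${ext}" for category "${category}"`);
  }
  if (ttlHours < 1 || ttlHours > 48) {
    errors.push('ttlHours must be between 1 and 48');
  }
  return errors;
}
```
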

File Access Issues

  1. File not found (410 Gone): File has expired and been automatically deleted
  2. Invalid file reference: Check that file IDs are correct and files haven't expired
  3. Job configuration errors: Ensure referenced files in job configs haven't expired

Job Validation Errors

  1. Too many pages: Reduce the document to 100 pages or fewer
  2. Too many elements: Reduce to 1,000 elements or fewer per document
  3. Too many tables: Reduce to 50 tables or fewer per document
  4. Too many images: Reduce to 100 images or fewer per document
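
These four checks can be run locally before submitting a job (a sketch; `checkDocumentLimits` is a hypothetical helper mirroring the Free Plan limits from the Document & Processing Limits table):

```javascript
// Free Plan document limits from the Document & Processing Limits table.
const DOCUMENT_LIMITS = { pages: 100, elements: 1000, tables: 50, images: 100 };

// Report every limit a document's counts exceed; an empty array means
// the config should pass job validation.
function checkDocumentLimits(counts) {
  return Object.entries(DOCUMENT_LIMITS)
    .filter(([key, max]) => (counts[key] || 0) > max)
    .map(([key, max]) => `Too many ${key}: ${counts[key]} (max ${max})`);
}
```
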