
Easy Email Finder API Rate Limits and Best Practices

Published February 6, 2026

Understanding Rate Limits

The Easy Email Finder API enforces rate limits to ensure fair usage and reliable performance for all users. Understanding these limits is essential for building integrations that work smoothly without interruptions.

Current Rate Limits

  • Default endpoints (/search, /balance, /usage): 60 requests per minute
  • Enrich endpoints (/enrich, /enrich-batch, /search-and-enrich): 10 requests per minute

These limits are per API key. If you hit a rate limit, the API returns a 429 Too Many Requests status code with a Retry-After header indicating how many seconds to wait.
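Rather than reacting to 429s after the fact, you can also stay under these per-minute caps proactively with a client-side sliding-window limiter. The sketch below is illustrative (the `MinuteRateLimiter` class is not part of any official client); it tracks recent call timestamps and sleeps only when the window is full:

```python
import time
from collections import deque

class MinuteRateLimiter:
    """Client-side sliding window: allow at most `max_calls` per 60 seconds."""

    def __init__(self, max_calls):
        self.max_calls = max_calls
        self.calls = deque()  # timestamps of recent calls

    def wait(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the 60-second window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        # If the window is full, sleep until the oldest call expires.
        if len(self.calls) >= self.max_calls:
            time.sleep(60 - (now - self.calls[0]))
        self.calls.append(time.monotonic())

# One limiter per limit class, matching the documented caps.
search_limiter = MinuteRateLimiter(60)  # default endpoints
enrich_limiter = MinuteRateLimiter(10)  # enrich endpoints
```

Call `limiter.wait()` immediately before each request; combined with the backoff pattern below, this keeps 429s rare rather than routine.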

Rate Limit Response

HTTP/1.1 429 Too Many Requests
Retry-After: 30
Content-Type: application/json

{
  "error": "Rate limit exceeded",
  "retryAfter": 30
}

Best Practice 1: Implement Exponential Backoff

Never retry immediately after a 429 response. Instead, implement exponential backoff that respects the Retry-After header.

import time
import requests

def api_call_with_backoff(url, headers, json_data, max_retries=5):
    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=json_data)

        if resp.status_code == 200:
            return resp.json()

        if resp.status_code == 429:
            # Honor Retry-After, but never wait less than the exponential step.
            retry_after = int(resp.headers.get("Retry-After", 30))
            wait_time = max(retry_after, 2 ** attempt)
            print(f"Rate limited. Waiting {wait_time}s (attempt {attempt + 1})")
            time.sleep(wait_time)
            continue

        # Any other error (400, 401, 402, 5xx) is raised to the caller.
        resp.raise_for_status()

    raise RuntimeError("Max retries exceeded")

Best Practice 2: Use Batch Endpoints

Instead of calling /enrich once per website, use /enrich-batch to process up to 20 websites in a single request. This uses only 1 of your 10 enrich requests per minute but processes 20 websites.

# Bad: 20 individual calls (uses 20 of your 10/min limit = rate limited)
for website in websites:
    requests.post(f"{BASE}/enrich", headers=headers, json={"website": website})

# Good: 1 batch call (uses 1 of your 10/min limit)
requests.post(f"{BASE}/enrich-batch", headers=headers, json={
    "websites": websites[:20]
})

This is the single most impactful optimization you can make. A list of 100 websites requires 100 individual enrich calls (impossible within rate limits without long waits) but only 5 batch calls (easily fits within limits).
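The arithmetic above can be sketched as a small helper that splits any website list into batch-sized chunks (the `chunk` function is illustrative, not part of the API client):

```python
def chunk(items, size=20):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# 100 websites become 5 batches of 20 -- one /enrich-batch call each.
websites = [f"example{i}.com" for i in range(100)]
batches = chunk(websites)
print(len(batches))  # 5
```

Each batch then costs a single request against the 10/min enrich limit, so the whole list clears in well under a minute.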

Best Practice 3: Cache Search Results

The /search endpoint is free but still rate-limited to 60 requests per minute. If you are repeatedly searching for the same keyword-location pairs, cache the results locally.

import hashlib
import json
import os
from datetime import datetime, timedelta

import requests

CACHE_DIR = "./search_cache"
CACHE_TTL = timedelta(hours=24)

def cached_search(query, location, mode="local"):
    cache_key = hashlib.md5(f"{query}:{location}:{mode}".encode()).hexdigest()
    cache_file = os.path.join(CACHE_DIR, f"{cache_key}.json")

    # Check cache
    if os.path.exists(cache_file):
        stat = os.stat(cache_file)
        age = datetime.now() - datetime.fromtimestamp(stat.st_mtime)
        if age < CACHE_TTL:
            with open(cache_file) as f:
                return json.load(f)

    # Make API call
    resp = requests.post(f"{BASE}/search", headers=HEADERS, json={
        "query": query, "location": location, "mode": mode
    })
    data = resp.json()

    # Save to cache
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(cache_file, "w") as f:
        json.dump(data, f)

    return data

Best Practice 4: Space Out Enrichment Calls

With a limit of 10 enrich requests per minute, aim for roughly one batch call every 6-7 seconds to stay comfortably within limits.

import time

ENRICH_INTERVAL = 7  # seconds between batch calls

for i in range(0, len(websites), 20):
    batch = websites[i:i+20]
    results = enrich_batch(batch)
    process_results(results)

    if i + 20 < len(websites):
        time.sleep(ENRICH_INTERVAL)

Best Practice 5: Monitor Usage Proactively

Use the free /usage and /balance endpoints to monitor your consumption. Build alerts into your pipeline that pause processing when credits run low.

def check_credits_before_batch(batch_size):
    resp = requests.get(f"{BASE}/balance", headers=HEADERS)
    credits = resp.json().get("credits", 0)

    if credits < batch_size:
        print(f"Low credits ({credits}). Need {batch_size}. Stopping.")
        return False
    return True

Read our detailed guide on monitoring your API usage for more advanced patterns.

Best Practice 6: Handle Errors Gracefully

Beyond rate limits, handle other HTTP error codes appropriately:

  • 400 Bad Request: Check your request payload format
  • 401 Unauthorized: Your API key is invalid or expired
  • 402 Payment Required: Insufficient credits
  • 429 Too Many Requests: Rate limited, back off and retry
  • 500 Internal Server Error: Retry with backoff
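The table above can be turned into a single dispatch helper. This is a sketch, not the official client: the exception classes are arbitrary choices for illustration, and your application may prefer custom error types.

```python
import requests

def handle_response(resp):
    """Map the documented status codes to actions (illustrative sketch)."""
    if resp.status_code == 200:
        return resp.json()
    if resp.status_code == 400:
        raise ValueError(f"Bad request payload: {resp.text}")
    if resp.status_code == 401:
        raise PermissionError("API key invalid or expired")
    if resp.status_code == 402:
        raise RuntimeError("Insufficient credits")
    if resp.status_code == 429:
        retry_after = int(resp.headers.get("Retry-After", 30))
        raise TimeoutError(f"Rate limited; retry after {retry_after}s")
    if resp.status_code >= 500:
        raise ConnectionError("Server error; retry with backoff")
    resp.raise_for_status()
```

Centralizing this logic means every call site gets the same, predictable error behavior instead of ad-hoc status checks.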

Node.js Rate Limiter Example

import PQueue from 'p-queue';

// 1 request every 7 seconds for enrich endpoints
const enrichQueue = new PQueue({
  concurrency: 1,
  interval: 7000,
  intervalCap: 1
});

// 1 request per second for search endpoints
const searchQueue = new PQueue({
  concurrency: 1,
  interval: 1000,
  intervalCap: 1
});

// Usage
const results = await enrichQueue.add(() => enrichBatch(websites));
const businesses = await searchQueue.add(() => searchBusinesses(query, loc));

Summary

Building a reliable integration comes down to three principles: use batch endpoints, implement backoff, and monitor your usage. Follow these practices and you are unlikely to run into unexpected rate-limit errors. For more optimization tips, see our cost optimization guide. Full API details are at easyemailfinder.com/developer/docs.

Ready to find business emails?

Try Easy Email Finder free — get 5 credits to start.

