API & Developer Guides

Cost Optimization Tips for the Easy Email Finder API

Published February 16, 2026

Every Credit Counts

At $0.25 per email, the Easy Email Finder API is already one of the most affordable options for email discovery. But when you are processing thousands of leads, small optimizations add up to significant savings. Here are proven strategies to get more value from every credit.

Tip 1: Use the Free Search Endpoint to Pre-Filter

The /search endpoint is free and returns business data including whether a website exists. Before enriching, filter out businesses without websites since they cannot yield an email.

import requests

API_KEY = "eef_live_your_api_key_here"
BASE = "https://easyemailfinder.com/api/v1"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

# Free search
resp = requests.post(f"{BASE}/search", headers=HEADERS, json={
    "query": "dentists",
    "location": "Austin, TX",
    "mode": "local"
})
businesses = resp.json().get("results", [])

# Filter: only enrich businesses with websites
with_websites = [b for b in businesses if b.get("website")]
print(f"Total: {len(businesses)}, With websites: {len(with_websites)}")
print(f"Savings: {len(businesses) - len(with_websites)} credits not wasted")

# Now enrich only the filtered list
resp = requests.post(f"{BASE}/enrich-batch", headers=HEADERS, json={
    "websites": [b["website"] for b in with_websites[:20]]
})

In a typical search, 10-30% of results may not have websites. This simple filter saves those credits immediately.
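The enrichment snippets in this post cap each batch at 20 websites. For larger filtered lists, a small batching helper (plain Python, not part of the API) keeps every request within that limit:

```python
def chunks(items: list, size: int = 20):
    """Yield successive fixed-size batches from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Example: 45 websites become batches of 20, 20, and 5
batch_sizes = [len(batch) for batch in chunks(list(range(45)))]
print(batch_sizes)  # [20, 20, 5]
```

Loop over `chunks(with_websites)` and call /enrich-batch once per batch instead of slicing to the first 20.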

Tip 2: Deduplicate Before Enriching

If you search across multiple locations, the same business chain might appear in multiple results. Normalize and deduplicate website URLs before enriching.

from urllib.parse import urlparse

def normalize_url(url: str) -> str:
    """Normalize a URL to a bare domain for deduplication."""
    parsed = urlparse(url.lower().strip())
    domain = parsed.netloc or parsed.path.split("/")[0]
    # removeprefix (Python 3.9+) strips only a leading "www." --
    # replace() would mangle any domain that merely contains "www."
    return domain.removeprefix("www.")

# Deduplicate websites across multiple searches
all_websites = [
    "https://www.example-dental.com",
    "http://example-dental.com/",
    "https://WWW.EXAMPLE-DENTAL.COM/contact",
]

unique = list(set(normalize_url(w) for w in all_websites))
print(f"Before: {len(all_websites)}, After dedup: {len(unique)}")
# Output: Before: 3, After dedup: 1
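Reducing to bare domains works when you only need uniqueness counts. If you want to keep the original full URLs (for example, to pass straight to /enrich-batch), keep the first URL seen per normalized domain instead. A self-contained sketch:

```python
from urllib.parse import urlparse

def dedupe_keep_first(urls: list) -> list:
    """Keep the first URL seen for each normalized domain."""
    seen = set()
    unique_urls = []
    for url in urls:
        parsed = urlparse(url.lower().strip())
        # Same normalization as normalize_url: bare domain, no "www."
        key = (parsed.netloc or parsed.path.split("/")[0]).removeprefix("www.")
        if key not in seen:
            seen.add(key)
            unique_urls.append(url)
    return unique_urls

urls = [
    "https://www.example-dental.com",
    "http://example-dental.com/",
    "https://WWW.EXAMPLE-DENTAL.COM/contact",
]
print(dedupe_keep_first(urls))  # ['https://www.example-dental.com']
```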

Tip 3: Cache Enrichment Results

If you enrich the same website twice, you pay twice. Maintain a local cache of previously enriched websites and their results.

import json
import os
import hashlib

CACHE_FILE = "enrichment_cache.json"

def load_cache() -> dict:
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE, "r") as f:
            return json.load(f)
    return {}

def save_cache(cache: dict):
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)

def enrich_with_cache(websites: list) -> list:
    cache = load_cache()
    uncached = []
    results = []

    for url in websites:
        key = hashlib.md5(normalize_url(url).encode()).hexdigest()
        if key in cache:
            results.append(cache[key])
        else:
            uncached.append(url)

    if uncached:
        # Only pay for websites not in the cache; send in batches of 20
        # rather than silently dropping everything past the first 20
        for i in range(0, len(uncached), 20):
            resp = requests.post(f"{BASE}/enrich-batch", headers=HEADERS, json={
                "websites": uncached[i:i + 20]
            })
            new_results = resp.json().get("results", [])

            for r in new_results:
                key = hashlib.md5(normalize_url(r.get("website", "")).encode()).hexdigest()
                cache[key] = r
                results.append(r)

        save_cache(cache)

    print(f"Cache hits: {len(websites) - len(uncached)}, API calls: {len(uncached)}")
    return results

Tip 4: Filter Out Known Non-Business Domains

Some Google Places results link to social media pages, directory listings, or platform pages instead of actual business websites. These rarely yield useful emails.

SKIP_DOMAINS = {
    "facebook.com", "instagram.com", "twitter.com", "yelp.com",
    "yellowpages.com", "bbb.org", "linkedin.com", "tiktok.com",
    "pinterest.com", "thumbtack.com", "angi.com", "nextdoor.com"
}

def should_enrich(website: str) -> bool:
    domain = normalize_url(website)
    # Match the domain exactly or as a subdomain (e.g. "m.facebook.com").
    # A plain substring check would wrongly skip legitimate domains
    # that merely contain a blocked name, like "mybbb.org".
    return not any(domain == skip or domain.endswith("." + skip)
                   for skip in SKIP_DOMAINS)

# Filter before enriching (websites: URLs collected from /search results)
enrichable = [w for w in websites if should_enrich(w)]
print(f"Skipped {len(websites) - len(enrichable)} non-business domains")

Tip 5: Use search-and-enrich Strategically

The /search-and-enrich endpoint is convenient but enriches all results automatically. If you only want high-rating businesses or specific types, the two-step approach (search, filter, then enrich) saves credits. See our search-and-enrich guide for details on when each approach is better.

# Two-step: search, filter by rating, then enrich
search_resp = requests.post(f"{BASE}/search", headers=HEADERS, json={
    "query": "dentists",
    "location": "Austin, TX",
    "mode": "local"
})
results = search_resp.json().get("results", [])

# Only enrich highly-rated businesses
high_quality = [r for r in results
    if r.get("website") and r.get("googleRating", 0) >= 4.0]

print(f"All results: {len(results)}, High-quality: {len(high_quality)}")

# Enrich only the filtered set -- this is where the credits are saved
enrich_resp = requests.post(f"{BASE}/enrich-batch", headers=HEADERS, json={
    "websites": [r["website"] for r in high_quality[:20]]
})

Tip 6: Monitor Usage and Set Budgets

Use the free /usage and /balance endpoints to track consumption. Build budget alerts into your pipeline.

PRICE_PER_EMAIL = 0.25

def get_credits() -> int:
    resp = requests.get(f"{BASE}/balance", headers=HEADERS)
    return resp.json().get("credits", 0)

def check_budget(initial_credits: int, max_spend: float) -> bool:
    """True while this session's spend (in dollars) stays under max_spend."""
    spent = (initial_credits - get_credits()) * PRICE_PER_EMAIL
    return spent < max_spend

# Record the balance once at pipeline start, then call check_budget between batches

Read our full guide on monitoring usage and balance.

Tip 7: Buy Credits in Bulk

Easy Email Finder uses a simple pay-as-you-go model with no volume discounts built into the API itself. However, buying a larger credit package upfront ensures you do not run out mid-pipeline and avoids the overhead of repeated small purchases.

Savings Summary

Combining all tips, here is the potential savings on a 1,000-website enrichment job:

  • Pre-filtering (no website): saves ~15% = 150 credits ($37.50)
  • Deduplication: saves ~5% = 50 credits ($12.50)
  • Domain filtering: saves ~8% = 80 credits ($20.00)
  • Caching (repeat runs): saves ~20% = 200 credits ($50.00)
  • Total potential savings: ~48% or $120 per 1,000 websites
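As a sanity check on the figures above (the percentages are this post's illustrative estimates, not guarantees):

```python
TOTAL_WEBSITES = 1000
PRICE_PER_EMAIL = 0.25
savings_rates = {
    "pre-filtering": 0.15,
    "deduplication": 0.05,
    "domain filtering": 0.08,
    "caching": 0.20,
}

credits_saved = sum(int(TOTAL_WEBSITES * rate) for rate in savings_rates.values())
dollars_saved = credits_saved * PRICE_PER_EMAIL
print(f"Credits saved: {credits_saved} (${dollars_saved:.2f}, "
      f"{credits_saved / TOTAL_WEBSITES:.0%} of the job)")
# Credits saved: 480 ($120.00, 48% of the job)
```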

Next Steps

These strategies work together. Combine them in your pipeline for maximum savings. For implementation details, see our batch enrichment guide or the rate limits best practices. Full API documentation is at easyemailfinder.com/developer/docs.
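As one illustration of combining the tips, here is a pure-Python sketch of the filtering stages chained together before any paid call is made (no API requests; the sample records and the `prepare_for_enrichment` helper are made up for this example):

```python
from urllib.parse import urlparse

SKIP_DOMAINS = {"facebook.com", "yelp.com"}

def prepare_for_enrichment(businesses: list) -> list:
    """Apply tips 1, 2, and 4: drop no-website results, skip
    non-business domains, and dedupe by normalized domain."""
    seen = set()
    ready = []
    for b in businesses:
        website = b.get("website")
        if not website:                       # Tip 1: no website, no email
            continue
        parsed = urlparse(website.lower().strip())
        domain = (parsed.netloc or parsed.path.split("/")[0]).removeprefix("www.")
        if any(domain == s or domain.endswith("." + s) for s in SKIP_DOMAINS):
            continue                          # Tip 4: non-business domain
        if domain in seen:                    # Tip 2: already queued
            continue
        seen.add(domain)
        ready.append(website)
    return ready

sample = [
    {"name": "A Dental", "website": "https://www.a-dental.com"},
    {"name": "A Dental (2nd location)", "website": "http://a-dental.com/"},
    {"name": "B Dental", "website": None},
    {"name": "C Dental", "website": "https://facebook.com/c-dental"},
]
print(prepare_for_enrichment(sample))  # ['https://www.a-dental.com']
```

Only the list this function returns needs to be sent to /enrich-batch (ideally checked against the Tip 3 cache first).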

Ready to find business emails?

Try Easy Email Finder free — get 5 credits to start.

Start Finding Emails
