I Let ChatGPT Write 100 Cold Emails — Here Is What Actually Happened
Published February 28, 2026
The Experiment
Everyone has an opinion about using AI to write cold emails. Some swear by it. Others say it produces robotic, detectable copy that kills reply rates. I decided to find out with an actual test, not an opinion.
The setup was simple. I built a list of 100 local businesses across five industries (dentists, restaurants, auto repair shops, salons, and HVAC companies) using Easy Email Finder. For each business, I had the email address, Google rating, review count, website URL, and business category. I fed all of this context to GPT-4 and asked it to write a unique, personalized cold email for each business. No human editing. No cherry-picking. Whatever ChatGPT wrote is what I sent.
Then I sent all 100 emails from a properly warmed domain with clean authentication and tracked every metric: opens, replies, bounce rate, and spam complaints.
What I Gave ChatGPT
For each business, ChatGPT received:
- The business name and industry
- The city and state
- Their Google rating and review count
- Their website URL
- The name of the person (when available from the website)
- My offer: web design services for local businesses
- Instructions to keep each email under 100 words, reference specific details about the business, and ask a low-friction question
I used a consistent system prompt that told ChatGPT to follow the 75-word cold email framework: specific observation, relevant insight, one credibility point, low-friction CTA.
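The steps above amount to a small data-to-prompt pipeline. Here is a minimal sketch of how that assembly could look; the `build_prompt` helper, the field names, and the example business are my own illustrations, not the exact code or data from the experiment.

```python
# Illustrative sketch: turn one row of business data into the model's
# user prompt. Field names (name, category, rating, ...) are assumptions;
# adapt them to whatever your data source exports.

SYSTEM_PROMPT = (
    "Follow the 75-word cold email framework: specific observation, "
    "relevant insight, one credibility point, low-friction CTA. "
    "Keep the email under 100 words and ask a low-friction question."
)

def build_prompt(biz: dict) -> str:
    """Assemble the per-business context block fed to the model."""
    lines = [
        f"Business: {biz['name']} ({biz['category']})",
        f"Location: {biz['city']}, {biz['state']}",
        f"Google rating: {biz['rating']} across {biz['review_count']} reviews",
        f"Website: {biz['website']}",
    ]
    if biz.get("contact_name"):  # only present when scraped from the site
        lines.append(f"Contact: {biz['contact_name']}")
    lines.append("Offer: web design services for local businesses")
    return "\n".join(lines)

example = {
    "name": "Bright Smile Dental", "category": "dentist",
    "city": "Austin", "state": "TX",
    "rating": 4.8, "review_count": 230,
    "website": "https://example.com",
}
print(build_prompt(example))
```

The point of structuring it this way is that every data point you have becomes an explicit line the model can reference, which matters for the data-quality lesson later in this post.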
The Results
After two weeks (including follow-ups), here are the numbers:
- Emails sent: 100
- Bounced: 3 (3% bounce rate — the email list quality was solid)
- Opened: 51 of 97 delivered (52.6% open rate)
- Replied: 9 (9.3% reply rate)
- Positive replies: 5 (5.2% positive reply rate)
- Meetings booked: 3
- Spam complaints: 1
A 9.3% reply rate with 5.2% positive replies is above average for cold email. Most campaigns targeting local businesses with templated outreach see 3-6% total reply rates. So on a pure numbers basis, ChatGPT outperformed a generic template approach.
But the numbers only tell part of the story.
What ChatGPT Did Well
Personalization Speed
Writing 100 genuinely personalized emails would take a human 8-15 hours. ChatGPT produced all 100 in under 30 minutes, including my time formatting prompts and reviewing outputs. Even with that review pass, the process is 10-15 times faster than manual writing.
Observation Quality
ChatGPT was surprisingly good at turning raw data into natural-sounding observations. Given a dentist with a 4.8 rating and 230 reviews, it wrote: "Your 4.8-star rating across 230 reviews says a lot about the care you give patients — that does not happen by accident." This feels genuine and specific. It does not read like AI.
The key was giving ChatGPT specific data points to work with. When I gave it just a business name and industry, the output was generic and useless. When I gave it rating, review count, website URL, and location, the output improved dramatically. Data quality in, email quality out.
Variety
Across 100 emails, ChatGPT avoided obvious repetition. No two opening lines were identical. It varied its sentence structures, transitions, and CTAs. This matters because email providers can detect and penalize near-identical emails sent from the same account.
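If you want to verify that claim on your own batch rather than eyeball it, a quick pairwise similarity pass over the opening lines will surface near-duplicates. This sketch is my own check, not part of the experiment; it uses Python's standard `difflib`, and the 0.9 threshold is an arbitrary starting point.

```python
from difflib import SequenceMatcher

def near_duplicates(openers: list[str], threshold: float = 0.9) -> list[tuple[int, int]]:
    """Return index pairs of opening lines that are suspiciously similar."""
    pairs = []
    for i in range(len(openers)):
        for j in range(i + 1, len(openers)):
            ratio = SequenceMatcher(None, openers[i].lower(), openers[j].lower()).ratio()
            if ratio >= threshold:
                pairs.append((i, j))
    return pairs

openers = [
    "Your 4.8-star rating across 230 reviews says a lot about the care you give patients.",
    "Your 4.8-star rating across 214 reviews says a lot about the care you give patients.",
    "I noticed your shop has been serving the east side for over a decade.",
]
print(near_duplicates(openers))  # the first two openers should be flagged
```

`SequenceMatcher.ratio` is O(n²) per pair, so for 100 emails (about 5,000 comparisons of short strings) this runs in well under a second; for much larger batches you would want a cheaper fingerprinting approach.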
What ChatGPT Got Wrong
The "AI Smell"
About 15 of the 100 emails had a quality I call "AI smell" — they were technically correct but felt slightly off. Common tells included:
- Excessive positivity: "Your dedication to providing exceptional automotive service really shines through in your stellar online presence." No human writes like this in a cold email.
- Hollow compliments: Praising a business's "commitment to excellence" or "passion for their craft" without citing anything specific. These read as AI filler.
- Perfect grammar in casual context: Real cold emails have a slightly informal tone. ChatGPT defaults to grammatically perfect prose that feels stilted in an inbox.
- Feature listing: Despite instructions to keep things short, ChatGPT occasionally slipped into listing features instead of talking about outcomes.
Factual Fabrication
This was the most concerning issue. In 4 of 100 emails, ChatGPT fabricated details I did not provide. One email referenced a "recently launched loyalty program" that the restaurant did not have. Another mentioned "your new location" when the business had only one location. ChatGPT filled gaps in its knowledge by making things up, and it did so confidently.
If a prospect reads your email and immediately spots a false claim about their business, you have lost all credibility. You go from "thoughtful outreach" to "this person did not even look at my business" instantly. This is a deal-breaker for fully automated AI email campaigns.
Missing Emotional Intelligence
One auto repair shop had a 3.1 Google rating. ChatGPT still wrote an upbeat email referencing their "solid customer feedback." A human would recognize that a 3.1 rating is a problem, not a compliment, and would either skip the business or frame the outreach around reputation improvement. ChatGPT lacks the contextual judgment to make these calls.
The Replies That Did Not Convert
Nine people replied. Five were positive ("tell me more"), three were polite declines, and one was angry ("stop emailing me"). Of the five positive replies, I converted three to meetings. The two that did not convert both raised specific questions that required nuanced, consultative responses. I handled these myself; an AI follow-up would likely have lost them.
This reinforces the hybrid AI-human model: AI generates the initial outreach, humans handle the responses that require judgment.
Lessons for Using AI to Write Cold Emails
Lesson 1: Data Quality Is Everything
ChatGPT's output quality directly correlates with the input data quality. Feed it a name and email and you get generic slop. Feed it a business name, rating, review count, website URL, industry, and location and you get genuinely personalized output. Invest in good data sources before investing in AI writing tools.
Lesson 2: Always Review Before Sending
The factual fabrication problem means you cannot fully trust AI output. Budget 30-60 seconds per email for a human review pass. In this experiment, roughly 20% of emails contained false claims, tone-deaf compliments, or "AI smell" (15 with AI smell, 4 with fabrications), and a review pass catches them. At 60 seconds per email, reviewing 100 emails takes less than two hours, still a massive time savings over writing from scratch.
Lesson 3: Edit for "AI Smell"
Train yourself to spot AI-typical phrases and replace them. "Your commitment to excellence" becomes "your reviews on Google speak for themselves." "I was really impressed by" becomes "I noticed." "Exceptional" becomes "solid." Strip the superlatives. Lower the enthusiasm to a professional simmer. Real cold emails are understated, not enthusiastic.
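Once you have spotted the recurring tells, the mechanical part of this edit can be scripted. Here is a small sketch that applies the substitutions above as a first pass; the pattern list is just a starting set, and a human still reads the result.

```python
import re

# Replacement map built from the substitutions described above.
# Extend it with the AI-typical phrases you see in your own drafts.
AI_SMELL = {
    r"your commitment to excellence": "your reviews on Google speak for themselves",
    r"i was really impressed by": "I noticed",
    r"\bexceptional\b": "solid",
    r"\bstellar\b": "strong",
}

def deflate(email: str) -> str:
    """Lower AI-typical enthusiasm to a professional simmer."""
    for pattern, plain in AI_SMELL.items():
        email = re.sub(pattern, plain, email, flags=re.IGNORECASE)
    return email

draft = "I was really impressed by your exceptional service."
print(deflate(draft))  # → I noticed your solid service.
```

A find-and-replace pass like this handles the predictable superlatives; the harder judgment calls, like a compliment that is technically specific but tonally off, still need a human read.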
Lesson 4: Use a Proven Framework
ChatGPT performs best when you give it a clear structure to follow. The 75-word framework (observation, insight, credibility, CTA) constrains the AI's output and prevents it from rambling or adding unnecessary sections.
Lesson 5: AI for First Email, Human for Replies
The clearest finding from this experiment: AI is good at initiating conversations but bad at sustaining them. Use AI to write and send your initial email and first follow-up. Once a prospect replies, have a human take over. The combination produces better results than either alone.
The Template I Would Use Again
Here is the system prompt that produced the best results. You can use it directly with GPT-4 or any similar model:
"Write a cold email under 80 words using this framework: Line 1 references a specific detail about the business (use the data I provide). Line 2 connects that detail to an outcome or opportunity. Line 3 shares one relevant result from a similar client. Line 4 asks one low-friction question. Tone: professional, direct, slightly informal. Do not use superlatives. Do not fabricate details. If you do not have enough data for Line 1, start with a relevant industry insight instead."
Would I Do It Again?
Yes — with human review. The efficiency gain is too significant to ignore. Writing and personalizing 100 cold emails in under an hour (including review) versus 8-15 hours manually is a game-changer for solo founders and small teams.
But I would never send AI-written emails without reviewing them first. The fabrication risk and occasional tone-deafness make fully automated sends too risky for a brand I care about. The sweet spot is AI-generated first draft, human review, human follow-up on replies.
Start by building a data-rich prospect list with Easy Email Finder — the more data points you have per prospect, the better AI performs. Then feed that data into your AI writing workflow and edit the output before sending. For more AI outreach strategies, read our guides on personalizing 1,000 emails with AI and the AI outreach stack.
Ready to find business emails?
Try Easy Email Finder free — get 5 credits to start.
Start Finding Emails