The Myth of the Universal “Good” Open Rate: Why Averages Lie

Introduction – The Question Every Marketer Asks

I’ve been in rooms—boardrooms, Slack huddles, post-mortem calls—where this question comes up like clockwork. Usually, it’s a founder who just got their first Mailchimp report, or a marketing director staring at a dashboard wondering if they should be celebrating or panic-hiring. “What’s a good open rate?” they ask. And they want a number. They want to hear “22%” or “18%” so they can exhale and move on with their day.

Here’s what I’ve learned after managing email programs for SaaS companies with million-subscriber lists, tiny e-commerce brands just getting started, and nonprofit clients where every open literally translates to a meal served or a petition signed: the question itself is a trap.

Not because open rates don’t matter. They do. But because asking for a universal “good” open rate is like asking what a good salary is. For a software engineer in San Francisco? For a teacher in rural Mississippi? For a freelance artist in Berlin? The number is meaningless without context. And when you take that context-free number back to your team, you’re making decisions based on a ghost.

Why “What is a good open rate?” is a trick question

The email marketing industry has spent twenty years training marketers to ask the wrong question. Software platforms show you a number, color-code it green or red, and suddenly that number becomes a proxy for your entire competence. But open rates don’t exist in a vacuum. They are the result of a chain of events that started long before the send button was clicked.

I’ve consulted for a skincare brand that launched with a list of 500 friends, family, and early Kickstarter backers. Their first campaign opened at 68%. They thought they were geniuses. Six months later, after scaling to 15,000 subscribers through a pop-up discount, their open rate settled at 12%. The list hadn’t gotten worse. The audience had changed. The open rate was telling a story about who was on the list, not just about how good the subject lines were.

That’s the trick. Open rates are primarily a reflection of audience composition and deliverability infrastructure. Content matters—I’m not saying it doesn’t. But when someone asks me for a benchmark number, my first question back is always: “Tell me about your list. Where did it come from? How old is it? What’s your double opt-in rate?” If I don’t get answers to those questions, any number I give is just noise.

The danger of fixating on a single number

I’ve watched marketers do genuinely destructive things because they were chasing a number they read in a blog post.

They start buying lists—which is a fast track to spam folder purgatory—because they see their open rate dip below some mythical industry average and panic. They start sending clickbait subject lines that get the open but destroy trust with the audience. They stop emailing engaged segments because they’re afraid to annoy people, letting warm leads go cold while they obsess over a metric that was never meant to be a primary KPI in the first place.

Here’s what actually happens when you fixate on a single number: you optimize for the number, not the business outcome.

I worked with a B2B software company a few years back. Their open rate was hovering around 18%, which according to some benchmarks was “average.” The head of marketing was obsessed. We ran tests. We changed subject line structures, send times, preheader strategies. We got the open rate up to 24% in three months. Everyone was thrilled.

But revenue from email didn’t move. Not a dollar.

Why? Because we had optimized for opens, not conversions. We were writing subject lines that were curiosity-driven but disconnected from the offer. People opened, saw the email didn’t match the intrigue, and bounced. Click-through rates actually dropped during this period, even as opens went up.

The single-number fixation almost cost them a quarter of revenue growth because we were solving for the wrong variable. That’s the danger. You end up with a dashboard that looks good and a bank account that doesn’t.

The Problem with Averages: Apples vs. Oranges

I want to be clear about something: benchmarks exist for a reason. They give you a sense of the landscape. But they are descriptive, not prescriptive. Knowing that the average open rate for e-commerce is around 15% doesn’t tell you what your open rate should be. It tells you what everyone else’s is, across thousands of businesses with wildly different lists, products, and audiences.

Industry benchmarks: A starting point, not a finish line

Let’s look at the real data. I keep a running spreadsheet of benchmark reports from the major email service providers—Mailchimp’s annual report, Constant Contact’s benchmarks, HubSpot’s marketing statistics, and a few industry-specific reports from Klaviyo and Omnisend. The patterns are consistent, but the spread is enormous.

Nonprofit & religious organizations (The 45% club)

I’ve worked with two different nonprofits. One was a large international aid organization. The other was a regional food bank. Both had open rates that would make most marketers weep with envy—consistently in the 40-50% range.

Why? Because the audience wants to hear from them.

Nonprofit subscribers are self-selected. They signed up because they care about the mission. They’re not hunting for a discount code or entering a contest. They’re there because they want to be there. Add to that the fact that many nonprofit lists skew older, and older demographics still treat email as a primary communication channel, not a spam folder they occasionally glance at.

The food bank I worked with had a welcome email open rate of 73%. That’s not a typo. Seventy-three percent.

But here’s the catch: their click-through rate on fundraising emails was often under 2%. High opens, low action. Because people wanted to stay informed about the mission but weren’t always ready to donate. If I had benchmarked them against e-commerce and decided their 45% open rate was “good enough,” I would have missed the actual problem: they weren’t converting opens into donations.

E-commerce & retail (The 8-15% reality)

Now let’s talk about the other end of the spectrum.

E-commerce open rates are brutal. I’ve managed programs for D2C brands where a 12% open rate was considered healthy. Why? Because e-commerce lists are often built through discounts. Someone buys a $40 sweater, they get added to the list. They weren’t signing up because they wanted weekly emails about sweater trends. They signed up because they wanted 15% off their first order.

The email address becomes a transactional artifact, not a relationship signal.

Add to that the sheer volume of e-commerce email. The average consumer gets dozens of retail marketing emails a day. The competition for attention is fierce. And many of those subscribers are Gmail users, where marketing email gets buried in the “Promotions” tab faster than you can hit send.

I had a client in the outdoor gear space. Their abandoned cart emails opened at 28%—great. Their weekly newsletter opened at 7%. Same audience, same sender, vastly different open rates. Because the context of the email matters. A cart abandonment email is timely, transactional, expected. A newsletter is noise unless it’s exceptional.

B2B & SaaS (The 20-30% sweet spot)

B2B lands in the middle, but with its own complications.

SaaS open rates tend to hover in the 20-30% range, but the distribution is bimodal. You have the transactional emails—onboarding sequences, feature announcements, account updates—that often clear 40% or more. And then you have the nurture campaigns, the thought leadership content, the “we wrote a blog post” emails that struggle to break 15%.

The difference is intent. A new user who just signed up for a trial? They’re going to open your emails because they need to figure out how to use your product. A lead who downloaded a white paper six months ago? They’ve forgotten who you are.

I managed email for a project management SaaS. Their onboarding sequence consistently opened at 52%. Their weekly product update newsletter opened at 19%. Both were “good” for what they were. If I had averaged them together and said “our open rate is 35%,” I would have been lying to myself about the health of both programs.

Source credibility: Why Mailchimp data differs from HubSpot data

Here’s something most benchmark articles won’t tell you: the source of the data matters as much as the data itself.

Mailchimp’s benchmark report is massive—they analyze billions of emails. But Mailchimp’s user base skews heavily toward small businesses, solopreneurs, and early-stage companies. Their data reflects that. A small bakery using Mailchimp to send a weekly “today’s flavors” email is not the same as a mature B2B SaaS using Marketo.

HubSpot’s benchmarks, by contrast, pull primarily from their B2B customer base. Their numbers tend to be lower on opens but higher on click-through rates, because B2B audiences are more selective about what they open but more likely to engage once they do.

Klaviyo’s data is e-commerce first. Their open rates are often lower than Mailchimp’s overall averages, but their revenue-per-email numbers are higher, because they’re measuring different things.

If you pull a number from one report and treat it as gospel, you’re comparing your business to a cohort that might not look anything like your business. It’s like looking up the average height of humans and asking why your NBA player nephew doesn’t fit the number.

The Three Variables That Render Averages Useless

If I’m brought in to look at an email program, I ignore the open rate entirely for the first hour. I want to know three things: where the list came from, how old it is, and what the sender reputation looks like. These three variables predict open rate more accurately than any subject line test ever will.

Audience source: The warm lead vs. cold lead gap

The single biggest determinant of open rate is whether the person on the other end actually wants to hear from you.

Past customers vs. cold traffic

I’ve run the numbers across dozens of accounts. Past customers open at 2-3x the rate of cold leads. Sometimes more. It’s not even close.

Why? Because they already trust you. They’ve given you money. They know what to expect. When a past customer sees your name in their inbox, it’s a known quantity. When a cold lead sees it, it’s one of fifty promotional emails they’ll get today.

One of my clients in the supplement space had a list of 80,000 people. About 15,000 were past customers. The past customers opened at 32%. The cold leads opened at 9%. Same sender, same campaigns, vastly different results.

If I had looked at the blended average of roughly 13% and called it a day, I would have missed the fact that their cold acquisition funnel was broken. The problem wasn’t email strategy. The problem was that they were driving traffic to a lead magnet that attracted low-intent users.
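Here’s that blended-average trap in miniature, using the (approximate) supplement-brand numbers from above:

```python
# Segment sizes and open rates from the example above (approximate).
segments = {
    "past_customers": {"size": 15_000, "open_rate": 0.32},
    "cold_leads":     {"size": 65_000, "open_rate": 0.09},
}

total_opens = sum(s["size"] * s["open_rate"] for s in segments.values())
total_size = sum(s["size"] for s in segments.values())
blended = total_opens / total_size

# The single blended number erases a 3.5x engagement gap between segments.
print(f"blended open rate: {blended:.1%}")  # → 13.3%
```

The blended figure is dominated by whichever segment is bigger, which is exactly why it tells you about list composition, not email quality.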

Double opt-in vs. single opt-in lists

This one’s controversial, but the data is clear: double opt-in lists outperform single opt-in lists on open rates by 20-40%.

Double opt-in means someone gives you their email address, then clicks a confirmation link to verify they actually want it. The friction is real—you lose about 15-20% of signups in that confirmation step. But the ones who make it through are genuinely interested.

Single opt-in lists are bigger. Double opt-in lists are better.

I had a client who switched from single to double opt-in. Their list growth slowed by about 18%. Their open rate on the first campaign after the switch jumped 27%. The people who stayed wanted to be there.

This isn’t a moral argument about consent. It’s a practical one about engagement. If you’re optimizing for list size, you’re optimizing for lower open rates. Those two goals are in direct tension.

List age: The decay factor

Email lists rot.

It’s not a metaphor. Every month, about 2-3% of your list decays—people change jobs, abandon email addresses, or simply stop checking that account. After a year, 20-30% of your list is effectively dead.

I’ve seen it happen over and over. A brand builds a list aggressively for two years. They hit 100,000 subscribers. They’re thrilled. But they never cleaned the list. By year three, their open rate has dropped from 25% to 12%. They panic. They blame the subject lines. They blame the content. The real problem is that half their list consists of addresses that haven’t engaged in 18 months.

The math is unforgiving. If you’re adding 5,000 subscribers a month but losing 3,000 to natural decay and disengagement, your list is only marginally healthier. Most brands don’t track this. They track total subscribers. Then they wonder why open rates keep falling.
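The compounding is easy to underestimate. Here’s a back-of-envelope model—assuming a constant monthly decay rate, which is a simplification, since real decay varies by acquisition source:

```python
def project_list(start: int, monthly_adds: int, decay_rate: float,
                 months: int) -> int:
    """Project list size under constant monthly decay plus new signups."""
    size = start
    for _ in range(months):
        size = round(size * (1 - decay_rate)) + monthly_adds
    return size

# A 100k list with no new signups and 2.5% monthly decay is down to
# roughly 74k live addresses after a year.
print(project_list(100_000, 0, 0.025, 12))

# Adding 5,000/month still grows the list, but a large share of those
# signups is replacement for decayed addresses, not net gain.
print(project_list(100_000, 5_000, 0.025, 12))
```

Run the second scenario and you’ll see why a dashboard that only tracks total subscribers looks healthier than the list actually is.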

Sender reputation: The invisible hand

This is the variable nobody wants to talk about because it’s technical and boring, but it’s also the one that can kill you overnight.

Sender reputation is how email providers like Gmail, Outlook, and Yahoo score your domain and IP address. It’s based on engagement rates, spam complaints, bounce rates, and whether you’re sending to known spam traps. If your reputation drops, your emails go to spam. It’s that simple.

I worked with a brand that had a pristine reputation. Their emails always landed in the primary inbox. Open rates were consistently in the high 20s. Then they hired a growth consultant who convinced them to buy a list of 50,000 “targeted” leads. They sent one campaign to that list. Bounce rates spiked to 15%. Spam complaints flooded in. Gmail flagged them.

Their open rate on the next campaign dropped to 4%. It took six months of careful list cleaning and engagement campaigns to recover.

That’s the invisible hand. You don’t see it until it slaps you. And once it does, all the clever subject lines in the world won’t matter because your emails aren’t reaching inboxes.

Redefining “Good” for Your Business

If you’ve made it this far, you’ve probably realized that the question “What is a good open rate?” is starting to feel like a trick. Good. That’s the point.

Your only true benchmark: Your own historical data

I tell every client the same thing: your benchmark is you. Not some aggregate report. Not what your competitor claims their open rate is. You.

Track your open rate month over month, segment by segment. Understand the patterns. Your welcome email opens will be higher than your newsletter opens. Your abandoned cart opens will be higher than your post-purchase follow-ups. That’s not a problem to solve. That’s just how email works.

What you’re looking for is trend lines. Is your open rate trending up or down within each segment? Are new subscribers engaging at the same rate they were six months ago? Are your re-engagement campaigns performing better or worse than last quarter?

Those are the questions that lead to action. A static number compared to an irrelevant benchmark leads to anxiety.

Setting realistic KPIs based on your funnel stage

Your open rate expectations should shift depending on where the subscriber is in the funnel.

For a brand new lead who just opted in through a content download? I’m happy with 20-25% opens. That tells me they’re interested enough to see what happens next.

For a recent purchaser? I expect 40-50% opens on post-purchase emails. If it’s lower than that, something’s wrong with the timing or the offer.

For a subscriber who’s been on the list for two years and opens sporadically? I’m not expecting miracles. But I am watching to see if they ever engage with a re-engagement campaign.

Setting one KPI for your entire email program is like setting one KPI for your entire business—revenue. It’s technically accurate and completely useless for decision-making.

Focus on trend lines, not isolated numbers

Here’s the mindset shift I try to instill in every marketing team I work with: stop looking at the number. Start looking at the direction.

An open rate of 18% sounds average. But if that 18% is up from 14% six months ago, you’re doing something right. If it’s down from 22%, you’re doing something wrong. The direction tells you more than the number ever will.

I worked with a skincare brand whose open rate dropped from 24% to 19% over the course of a year. They were panicked. When I dug into the data, I found that their list had grown 300% in that same period, and the new subscribers were from lower-intent sources—contest entries, discount seekers, social media followers who wanted a freebie. The existing subscribers were still opening at the same rate. The blended number dropped because the composition of the list changed.

Was that a problem? Maybe. It meant they were building a list of lower-quality subscribers, which would eventually impact deliverability if left unchecked. But it wasn’t a crisis. It was a signal to revisit their acquisition strategy, not their email content.

If they had fixated on the isolated number—19% “good” or “bad”?—they would have missed the actual business decision they needed to make.

This is how you treat open rates like a professional. Not as a report card. Not as a vanity metric to post on LinkedIn. As a diagnostic tool that tells you something about your audience, your infrastructure, and your acquisition channels.

The next time someone asks you what a good open rate is, don’t give them a number. Ask them about their list. Ask them about their double opt-in rate. Ask them about their last deliverability audit. Ask them about the trend over the last six months.

If they can answer those questions, they don’t need a number. If they can’t, the number won’t help them anyway.

Decoding Apple’s Mail Privacy Protection (MPP): How the 2021 Update Changed Everything

Introduction – The Day the Metric Broke

A brief history of the open rate (pre-2021)

Before September 2021, the open rate was something we trusted. Not blindly—anyone who’d been in email for more than five minutes knew about image-blocking and the fact that opens were technically just a tracking pixel loading. But for the most part, if someone opened your email, the pixel fired, and you knew. It was as close to a reliable signal as email marketing had.

I remember building dashboards in the mid-2010s where open rate was the north star. We’d run A/B tests on subject lines with sample sizes calculated to hit statistical significance based on opens. We’d segment lists by “active opens” and “inactive opens” and build re-engagement campaigns around those definitions. We’d report to leadership with confidence: “Our open rate this quarter is 22.4%, up from 21.8% last quarter.”

It wasn’t perfect. There were always the image-blocking holdouts—Outlook users, mostly, and people who’d configured their email clients to not load images by default. We knew those opens weren’t being tracked. But it was a consistent blind spot. The same percentage of people across campaigns and senders had images blocked, so we could compare apples to apples, more or less.

The system worked well enough that we stopped questioning it. Open rates became embedded in every email marketing playbook, every benchmark report, every agency pitch deck. They were the metric.

Then Apple lit a match.

The announcement that shook the email marketing world

I was on a client call when the news broke. June 2021, Apple’s Worldwide Developers Conference. They announced Mail Privacy Protection as part of iOS 15 and macOS Monterey. I remember reading the feature description and feeling my stomach drop.

Here’s what Apple said: when a user enables Mail Privacy Protection, their email client will pre-load all email content—including tracking pixels—in the background, regardless of whether the user actually opens the email. The pixel fires whether the email is read, glanced at, or never touched.

I sat there doing the math in my head. If Apple Mail users—which accounted for somewhere between 35% and 50% of most consumer-facing lists—started pre-loading pixels, then every email sent to them would register as an open. Even if the email went straight to the trash. Even if they never looked at it. Even if they deleted it without a glance.

The client asked if I was okay. I said yes. I was not okay.

Over the next few months, as iOS 15 rolled out to users, the reality set in. Open rates across the industry started climbing. Not because people were suddenly more engaged, but because Apple had fundamentally broken the metric. The one number we’d been using to measure attention, to segment audiences, to report performance—it was now, for a huge chunk of our lists, completely fictional.

I had clients calling me in October and November of 2021 asking why their open rates had jumped 20 percentage points overnight. “We’re crushing it,” one founder told me. “What did we change?” Nothing, I said. Apple changed it for you.

That was the day the metric broke.

How MPP Works: The Technical Breakdown

The pre-load proxy: Why “opens” are no longer intentional

Let me explain what’s actually happening under the hood, because understanding the mechanism helps you understand why the old rules no longer apply.

Every marketing email contains a tiny, invisible image—usually a 1×1 pixel GIF—hosted on the sender’s server. When an email client loads that image, the server logs a request. That request is what we call an “open.” Pre-MPP, that request only happened when the user actually opened the email and allowed images to load.
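Mechanically, the whole system is just an image request plus a server log. Here’s a minimal sketch—not any real ESP’s implementation; the log store and function names are hypothetical:

```python
# Minimal sketch of open-tracking mechanics. The in-memory log and
# function names are hypothetical stand-ins for an ESP's infrastructure.
from datetime import datetime

# A 1x1 transparent GIF: the classic "tracking pixel" payload.
PIXEL_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\xff\xff\xff\x00\x00\x00"
    b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
    b"\x00\x02\x02D\x01\x00;"
)

open_log = []  # in-memory stand-in for the sender's request log

def serve_pixel(subscriber_id: str, campaign_id: str) -> bytes:
    """Called when an email client requests the pixel URL embedded in
    the message. Logging that request is what gets counted as an
    'open'—whether a human or a proxy server made the request."""
    open_log.append({
        "subscriber": subscriber_id,
        "campaign": campaign_id,
        "at": datetime.now().isoformat(),
    })
    return PIXEL_GIF
```

Pre-MPP, that request came from the subscriber’s device at read time. Post-MPP, Apple’s proxy makes it once at delivery, and the log entry looks identical either way—which is the entire problem.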

With Mail Privacy Protection, Apple’s mail client does something different. When an email lands in an MPP-enabled inbox, Apple’s servers—not the user’s device—fetch all the content of that email, including the tracking pixel, and cache it. The pixel fires on Apple’s servers, not on the user’s phone or computer.

So here’s what happens: You send an email to a subscriber with MPP enabled. Apple’s servers immediately load the email, fire the tracking pixel, and cache the content. Later, if the user actually opens the email, they’re loading it from Apple’s cache, not from your server. The pixel never fires again.

The result? You get an “open” for every single email sent to that subscriber, regardless of whether they ever looked at it.

This isn’t a bug. It’s a feature Apple deliberately built. They told us exactly what they were doing. Their stated goal was privacy—preventing senders from knowing when and where users open emails. And they accomplished it brilliantly, from a privacy standpoint. From a marketer’s standpoint, they pulled the floor out from under us.

I’ve had marketers ask me if there’s a way to detect MPP opens and filter them out. The short answer is no. You can identify which opens likely came from MPP based on user agent and behavior patterns, but you can’t know with certainty. The pixel fired. The server logged it. That’s an open, as far as your ESP is concerned.

Identifying the affected audience

Not everyone on your list is affected by MPP. Knowing who is—and who isn’t—is the first step to making sense of your data again.

Apple Mail users (iOS & macOS)

MPP affects users who meet two conditions. First, they need to be using the Apple Mail app—the default email client on iPhones, iPads, and Macs. Not Gmail’s app. Not Outlook’s app. Not Spark or Superhuman or any other third-party client. Apple Mail.

Second, they need to have Mail Privacy Protection enabled. When iOS 15 first launched, Apple presented users with a prompt asking if they wanted to enable it. Most users clicked yes, because it was presented as a privacy feature and required zero effort. In my experience, the opt-in rate is somewhere between 70% and 90% of Apple Mail users. That’s not official Apple data—they don’t release it—but it’s consistent across the accounts I’ve managed.

So the affected audience is: people who use Apple Mail and who said yes to the privacy prompt.

That’s not everyone. Gmail app users aren’t affected. Outlook app users aren’t affected. People who use Apple Mail but clicked “no” aren’t affected. But the segment is significant enough that you can’t ignore it.

Estimating your list impact (The 35-50% rule)

Here’s the rule of thumb I use across accounts: if your audience is primarily consumer-facing, expect 35-50% of your list to be MPP-affected. If you’re B2B, that number drops—sometimes significantly—because business users are more likely to use Outlook or Gmail for work accounts.

I managed a D2C apparel brand where MPP-affected subscribers accounted for 48% of the list. A B2B SaaS client came in at 22%. A nonprofit with a donor base that skewed older? 54%, because older demographics are more likely to use whatever came default on their iPhone.

The variation matters. If you’re looking at your overall open rate without segmenting by email client, you’re looking at a number that combines MPP-inflated opens from one segment with real opens from another. It’s like averaging the temperature in a freezer and a furnace and wondering why your food won’t cook.

I pull a client’s email client breakdown in every audit now. If I don’t see a clear segmentation between Apple Mail and other clients, I know they’re flying blind.
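A quick way to size the blind spot: multiply your Apple Mail share (from your ESP’s client breakdown) by an assumed MPP opt-in rate. The 70-90% opt-in range above is my field observation, not official Apple data, so treat the output as an estimate:

```python
def mpp_affected_share(apple_mail_share: float,
                       mpp_optin_rate: float = 0.8) -> float:
    """Estimate the fraction of a list whose opens are MPP-inflated.
    The default 80% opt-in rate is an assumption, not Apple data."""
    return apple_mail_share * mpp_optin_rate

# A consumer list where 55% of subscribers read in Apple Mail:
print(f"{mpp_affected_share(0.55):.0%}")  # → 44%
```

Even the low end of the range is usually enough to make an unsegmented open rate unusable as a precise metric.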

The Fallout: What MPP Did to Your Data

The artificial inflation of open rates

The most immediate and obvious impact was the inflation. Open rates went up across the board, but not evenly. Lists with high Apple Mail penetration saw dramatic spikes. Lists with low penetration saw barely any movement.

I had a client in the outdoor gear space whose open rate jumped from 19% to 34% over a three-month period. They’d changed nothing about their email program. The 34% was a fiction. The real open rate among non-MPP users was actually slightly down during that period—something we only discovered after segmenting.

The inflation creates a weird dynamic where your reported open rate becomes a function of your list composition rather than your email quality. Add more Apple Mail users? Your open rate goes up. Lose them to churn? Your open rate goes down. You can be doing everything right with your subject lines and content and see your open rate drop because your audience shifted toward Android users.

I’ve had to walk founders off ledges multiple times since 2021. They see a drop and panic. Then we pull the segment data and realize the drop is just a shift in list composition, not a failure of strategy. But if you’re not segmenting, you don’t know that. You just see a number moving and assume you’re the cause.

The distortion of A/B testing results

This one’s more subtle but just as destructive.

Pre-MPP, A/B testing subject lines was straightforward. You’d split your audience, send two versions, and measure which had a higher open rate. The winner was the one that got more people to click.

Post-MPP, if you’re still measuring A/B tests by open rate, you’re testing something entirely different. For MPP users, the open rate is effectively 100% regardless of subject line—Apple’s servers pre-load every email before the subject line is ever read. So your test results are now being diluted by a segment that will register an open on every send, no matter what.

I saw this happen with a client who ran a subject line test. Version A had a 28% open rate. Version B had a 31% open rate. The marketing manager declared Version B the winner and rolled it out to the whole list.

When I segmented the data, the non-MPP open rates were 18% for Version A and 17% for Version B—Version A had actually performed better among real opens. The overall numbers were flipped because the MPP segment (which opened everything) was larger in the Version B group due to random variation in the split.

The client had been optimizing for a metric that no longer meant what they thought it meant.
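The flip is a small Simpson’s-paradox effect, and it’s easy to reproduce. The arm sizes below are illustrative, chosen to mirror the numbers above; the key assumption is that MPP opens fire at effectively 100%:

```python
def overall_open_rate(n_mpp: int, n_other: int,
                      real_open_rate: float) -> float:
    """Blend MPP users (who 'open' everything) with real opens."""
    return (n_mpp + n_other * real_open_rate) / (n_mpp + n_other)

# Version A: better real open rate, but a smaller MPP share in its arm.
a = overall_open_rate(n_mpp=1_200, n_other=8_800, real_open_rate=0.18)
# Version B: worse real open rate, larger MPP share from an uneven split.
b = overall_open_rate(n_mpp=1_700, n_other=8_300, real_open_rate=0.17)

print(f"A: {a:.1%}  B: {b:.1%}")  # → A: 27.8%  B: 31.1%
# B "wins" overall even though A won among real (non-MPP) opens.
```

A few hundred subscribers of random variation in the split is all it takes to crown the wrong winner.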

If you’re running A/B tests post-MPP, you have two options. First, you can test on click-through rate instead of open rate. That’s what I recommend for most clients. Second, you can filter out MPP users from your test and only measure non-Apple clients. Most ESPs allow this if you know how to set up the segments.

But if you’re still running subject line tests and declaring winners based on open rate without accounting for MPP, you’re making decisions on bad data.

Why location data is now mostly useless

This is the fallout that doesn’t get talked about enough.

Pre-MPP, we could see roughly where opens were happening. City, state, country. It was useful for timing sends, localizing content, understanding geographic engagement patterns.

Now, with MPP, the location data you get is the location of Apple’s proxy servers. Not your subscriber. Those servers are scattered across data centers—many of them in Northern Virginia, where AWS has a massive presence, or in other major data hub locations.

I’ve looked at location reports where 40% of “opens” were showing from Ashburn, Virginia. Unless I’m running a business that inexplicably appeals to residents of suburban DC data centers, those aren’t real opens. They’re MPP.

For B2B companies with sales territories, this is a genuine problem. If you’re using open location data to identify which accounts are engaging, you’re now seeing a massive cluster of “engagement” from Virginia that means nothing. It’s noise.

I’ve stopped using open location data entirely for most clients. If a client asks about geographic engagement, I look at click data instead. Clicks still come from the actual user’s device. That data is still reliable.

How to Navigate the Post-MPP Landscape

Segmenting Apple users vs. non-Apple users for accurate analysis

If you take nothing else from this, take this: segment your data.

I set up a segment in every ESP I work with that isolates non-Apple Mail users. Sometimes it’s “email client is not Apple Mail.” Sometimes it’s more specific—“email client is Gmail, Outlook, Yahoo, or Other.” The goal is to create a view of your data that excludes the MPP noise.

When I report to leadership now, I show two numbers. The overall open rate, with a caveat: this number is inflated by Apple’s privacy protections and should be considered directional only. And the non-Apple open rate, which is the closest thing we have to a true engagement signal.

The difference between these two numbers tells you something. If they’re close, your list is mostly non-Apple or mostly opted out of MPP. If they’re far apart, you have a high MPP penetration and your overall open rate is largely a fiction.

I also build segments for automation triggers. If I’m setting up a re-engagement campaign based on open inactivity, I cannot use open data from Apple Mail users. They’ll never be inactive by that definition. I either use non-Apple users for that automation or use click data instead.

This adds complexity to setup. There’s no way around it. The old days of a simple “if not opened in 90 days” trigger are over. That trigger will now keep Apple users in your active flow forever, regardless of whether they’ve ever looked at an email.
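The post-MPP version of that trigger, sketched as plain Python (the field names are hypothetical, not any specific ESP’s API): trust opens only from non-Apple clients, and fall back to clicks for Apple Mail users.

```python
from datetime import datetime, timedelta

def eligible_for_reengagement(subscriber: dict, now: datetime,
                              window_days: int = 90) -> bool:
    """True if the subscriber shows no trustworthy engagement signal
    within the window. Apple Mail 'opens' are ignored entirely,
    since MPP fires them regardless of real engagement."""
    window = timedelta(days=window_days)
    if subscriber.get("email_client") == "Apple Mail":
        last_signal = subscriber.get("last_click_at")  # clicks only
    else:
        last_signal = (subscriber.get("last_open_at")
                       or subscriber.get("last_click_at"))
    return last_signal is None or (now - last_signal) > window
```

With the naive “not opened in 90 days” rule, an Apple Mail user who never reads a thing stays “active” forever; with this version, they enter the re-engagement flow unless they’ve clicked recently.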

The metric hierarchy shift: Introducing CTOR and Click Rates

If open rates are now broken, what do we use instead? This is where the real work of post-MPP email strategy happens.

Why Click-Through Rate (CTR) is now the king

CTR hasn’t been affected by MPP. A click still requires the user to actually engage with the email, to move their cursor or thumb, to take action. Apple’s servers aren’t clicking links on your behalf. Not yet, anyway.

I’ve shifted nearly all of my reporting and optimization to focus on CTR. When I’m evaluating a campaign’s performance, the first number I look at is click rate, not open rate. When I’m running A/B tests, I test on clicks. When I’m segmenting audiences for engagement, I use clicks as the primary signal.

This takes some adjustment, especially for stakeholders who’ve been trained to look at opens. I’ve had to re-educate multiple leadership teams on why we’re “ignoring” opens. The explanation is always the same: we’re not ignoring them, we’re using them appropriately. They’re a directional signal. Clicks are what actually drive revenue.

Introducing Click-to-Open Rate (CTOR) as the quality check

CTOR is unique clicks divided by unique opens. It tells you, of the people who opened your email, what percentage clicked.

Here’s why CTOR matters post-MPP: it gives you a quality check on your opens.

If your open rate is inflated by MPP, your CTOR will drop—because you have a bunch of fake opens that will never click. A declining CTOR is often the first sign that MPP is distorting your numbers. It’s also a useful way to compare campaigns across time. If your open rate is up but your CTOR is down, you’re probably seeing MPP inflation, not real improvement.
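The distortion is easy to see with numbers. A rough sketch, using invented figures, of the same campaign before and after heavy MPP adoption:

```python
# Hypothetical send -- same real readers, same clicks, illustrative only.
clicks = 300

opens_pre_mpp = 2_000    # mostly real, human opens
opens_post_mpp = 4_000   # same readers plus ~2,000 machine "opens"

ctor_pre = clicks / opens_pre_mpp    # clicks per open before MPP
ctor_post = clicks / opens_post_mpp  # clicks unchanged, CTOR halves

print(f"CTOR pre-MPP:  {ctor_pre:.1%}")
print(f"CTOR post-MPP: {ctor_post:.1%}")
```

Open rate doubled, clicks didn't move, and CTOR fell from 15% to 7.5%. That falling CTOR, against a flat click count, is the MPP fingerprint.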

I use CTOR to evaluate content quality. Subject lines and preheaders drive opens. Content drives clicks. CTOR bridges the two. If CTOR is high, your content is resonating with the people who actually opened. If CTOR is low, you have a mismatch between what your subject line promised and what your email delivered.

Pre-MPP, I could infer that from open rate and click rate. Post-MPP, CTOR is essential because open rate alone no longer tells you how many real people are actually reading.

Accepting the open rate as a directional signal, not a truth

This is the mindset shift that separates the marketers who’ve adapted from the ones who are still stuck in 2020.

Open rates are not gone. They’re just different. They’re no longer a precise measurement of engagement. They’re a rough indicator of deliverability and subject line effectiveness, filtered through the lens of your list composition.

I still look at open rates. I just don’t trust them like I used to.

When I see a campaign with a 40% open rate, I don’t celebrate. I check the non-Apple segment. If that’s also 40%, I celebrate. If it’s 15%, I know the 40% is mostly MPP noise and I need to look at CTR to understand what actually happened.

When I see a declining open rate over time, I don’t panic. I look at list composition first. Are we adding more non-Apple users? Are we losing Apple users to churn? The trend might be real, or it might be a demographic shift.

The professional approach post-MPP is not to throw open rates out entirely. It’s to treat them with the skepticism they deserve. They’re one data point among many. They’re directional. They’re contextual. They’re no longer the center of the universe.

I’ve had clients ask if they should just stop reporting open rates. I don’t recommend that. Leadership still wants to see them. Investors still ask. But I do recommend showing them alongside click rates, CTOR, and non-Apple segments. Give the number context. Explain what it means and what it doesn’t.

If you’re still reporting a single open rate number in a dashboard and treating it as the primary indicator of email health, you’re not doing post-MPP email marketing. You’re doing pre-2021 email marketing with a broken metric.

Apple changed the rules in 2021. The professionals adapted. The rest are still wondering why their open rates look great but their revenue isn’t moving.

The Anatomy of a High-Converting Subject Line: Psychology Over Clicks

Introduction – The Gatekeeper of the Inbox

The 3-second window to capture attention

I’ve sat next to enough people watching their inboxes to know how this actually works. They’re not reading. They’re scanning. Thumb on the screen, scrolling through a list of senders and subject lines at a pace that would make a speed-reader dizzy. Stop. Delete. Stop. Open. Stop. Archive. It takes about three seconds from the moment they glance at your name to the moment they decide whether you live or die.

Three seconds.

That’s less time than it takes to tie a shoe. Less time than it takes to register what you’re looking at and make a judgment call. In that window, your subject line has to do something remarkable. It has to interrupt the pattern. It has to make a brain that’s already processing forty other emails today stop and say, “Wait. What’s this?”

I learned this lesson the hard way about ten years ago. I was running email for a SaaS company and we’d spent weeks crafting what we thought was the perfect newsletter. Great content. Strong design. Clear CTA. We sent it to a list of 50,000 with the subject line “October Product Updates.” The open rate was 7%. Seven. We might as well have not sent it.

The next month, I changed the subject line to “We fixed the thing you hated.” Same list. Same content. Same day of week. Open rate hit 24%. The content hadn’t changed. The offer hadn’t changed. The only thing that changed was whether people bothered to look.

That was the moment I stopped treating subject lines as metadata and started treating them as the primary creative challenge of email marketing. Because they are. The rest of the email doesn’t matter if no one opens it. And no one opens it if the subject line doesn’t earn their attention in the time it takes to blink.

The cost of a bad subject line (lower deliverability)

Here’s what most people don’t understand about subject lines. The cost of a bad one isn’t just a low open rate. It’s a slow death by reputation damage.

Email providers like Gmail and Outlook watch what people do with your emails. When someone gets an email from you and deletes it without opening, that’s a signal. When they mark it as spam, that’s a louder signal. When enough people ignore your emails, the algorithm learns. It starts routing you to the Promotions tab if you’re lucky, or the spam folder if you’re not.

I managed a list once where the previous marketer had a habit of writing subject lines that were basically lies. “Your account has been suspended” when the account was fine. “Urgent: Action required” when the email was just a feature announcement. The open rates were actually decent because the subject lines were manipulative. But the spam complaints were through the roof. People opened, felt tricked, and marked the email as spam out of annoyance.

By the time I took over, the domain reputation was shot. We were hitting spam folders at Gmail and Outlook at a rate of about 40%. No amount of great content was going to fix that. We had to rebuild reputation from scratch, which took six months of meticulously clean sending, high engagement rates, and absolutely zero gimmicks.

A bad subject line doesn’t just cost you that send. It costs you the ability to reach the inbox on future sends. The algorithm remembers.

The Psychology of the Click

Curiosity gap: The Zeigarnik effect

There’s a psychological principle called the Zeigarnik effect, named after the Soviet psychologist Bluma Zeigarnik, who noticed that waiters could remember complex orders from customers who hadn’t paid yet, but forgot them immediately after the bill was settled. The brain holds onto unfinished business. Open loops create tension. Tension demands resolution.

The curiosity gap is just this principle applied to subject lines. You create a gap between what the reader knows and what they want to know. Then you offer to close that gap if they open the email.

I’ve used this a thousand times. The formula is simple: make a statement that implies information the reader doesn’t have, without giving it away.

Examples of incomplete narratives

Here are subject lines I’ve written that worked because they left something out:

“What happened when we stopped running Facebook ads” — the implication is a story with a lesson. The reader wants to know the outcome. Did revenue crash? Did it stay the same? Did they discover something unexpected? They have to open to find out.

“The one metric we don’t report to the board” — this one works because it implies insider knowledge, a peek behind the curtain. There’s a secret here. If you open, you’ll be in on it.

“I’m about to do something stupid” — from a founder I worked with. It was a subject line for a fundraising email. The gap was obvious: what stupid thing? Is this a joke? Is it serious? The open rate was 47%.

The key is that the gap has to be compelling. “I have a question for you” creates a gap, but it’s vague and generic. “I have a question about your hiring plans” creates a gap that’s specific enough to matter to someone who’s hiring. The more specific the gap, the more likely the reader feels it applies to them.

The mistake I see marketers make is trying to close the gap in the subject line. They want to be clever and complete. But if you answer the question in the subject line, there’s no reason to open the email. The curiosity gap only works if you leave it open.

Personalization: Beyond just {First Name}

First name personalization is table stakes. It’s been table stakes for fifteen years. It doesn’t impress anyone. It doesn’t even register with most people anymore. If your idea of personalization is “Hey {{first_name}},” you’re not doing personalization. You’re doing mail merge.

Real personalization is behavioral. It’s using what you know about what someone has done to make the subject line relevant to them.

Behavioral personalization (“We noticed you left this behind”)

Abandoned cart subject lines are the classic example here, and they work for a reason. “You left something behind” isn’t generic. It’s specific to the person receiving it. They know they were looking at that product. The subject line reminds them of an unfinished action.

But you can take this further.

I worked with a B2B software company that sent a subject line that said “You’ve logged in 47 times this month.” The email was a simple check-in asking if they needed help with anything. The open rate was 58%. Because the subject line referenced actual behavior. The person receiving it knew it was true. They’d been using the product heavily. The subject line felt like it was written for them, because it was.

Another client in the education space sent “You started the course but didn’t finish Module 3.” Simple. Specific. Behavioral. The open rate was three times their average.

The principle is this: use data you have about the subscriber to make the subject line true for them. Not true in a generic way. True in a way that they know is true. That’s what cuts through the noise. When someone reads a subject line that references their actual behavior, they don’t think “marketing email.” They think “this is about me.”

Urgency & Scarcity: Fear of missing out (FOMO)

Urgency works. I wish it didn’t, because it’s been so overused that people are starting to develop immunity. But the data doesn’t lie. Subject lines that imply limited time or limited availability consistently outperform neutral subject lines.

The reason is evolutionary. Humans are wired to avoid loss more than they’re wired to seek gain. Losing something hurts more than gaining something feels good. Urgency triggers that loss aversion. The reader perceives that if they don’t act, they’ll lose something—a discount, an opportunity, access to something valuable.

I’ve run enough tests to know that urgency subject lines almost always win against non-urgency versions. The trick is making the urgency credible.

“Sale ends tonight” works if the sale actually ends tonight. If the sale ends tonight and then there’s another sale next week with the same discount, you’ve burned trust. I’ve seen brands do this repeatedly—a “24-hour flash sale” that appears every three days. Eventually, the audience learns. The urgency stops working because it’s never real.

The subject lines that work long-term are the ones where the scarcity is genuine. “Only 3 spots left for the workshop.” “Early bird pricing expires in 6 hours.” “Your trial ends tomorrow.” These work because they’re true, and the reader knows they’re true.

I had a client who ran a membership site. They sent a subject line that said “We’re closing doors for 30 days.” The email explained they were pausing new signups to focus on existing members. The urgency wasn’t manufactured. It was a real window closing. That subject line did a 62% open rate.

Problem-Agitation-Solution (PAS): The pain point opener

This is a copywriting framework that translates beautifully to subject lines. Problem. Agitate. Solution. You name a problem the reader has. You agitate it—make them feel why it’s a problem. Then you offer the solution in the email.

In a subject line, you usually only have room for the problem or the agitation. But if you pick the right problem, it’s enough.

“Struggling to hit your revenue targets?” — that’s a problem subject line. If the reader is struggling, they’ll open. If they’re not, they won’t. That’s fine. Targeting matters.

“Why your team keeps missing deadlines” — this is agitation. It implies there’s a reason for a known problem, and the email will explain it.

“Tired of losing candidates to competitors” — problem subject line for a recruiting email. It worked because the audience was recruiting managers who were, in fact, losing candidates to competitors.

The key to PAS subject lines is specificity. “Having problems with your business?” is too vague. “Having problems with cash flow?” is better. “Why your cash flow problems are getting worse” is better still. The more specific the problem, the more the reader feels seen.

The Technical Craft of Subject Lines

Mobile-first optimization (The 30-40 character limit)

I don’t care what your subject line looks like on desktop. Neither does your audience. The majority of email opens happen on mobile. I’ve seen numbers ranging from 60% to 80% depending on the industry and audience. If your subject line is optimized for a 27-inch monitor, you’re optimizing for the minority.

On an iPhone, the subject line gets cut off after about 30-40 characters, depending on the iOS version and whether there’s an emoji. Everything after that lives in the ellipsis. People don’t tap to expand. They make their decision based on the first 30 characters.

This changes how I write subject lines. The most important words go first. Not the clever twist at the end. Not the punchline. The hook.

If I’m writing “You won’t believe what happened when we stopped running ads,” that’s nearly 60 characters. On mobile, it becomes “You won’t believe what happened when we…” The reader doesn’t see the specific context. They see a generic curiosity line.

If I write “We stopped running ads. Here’s what happened,” that’s 44 characters. On mobile, it shows “We stopped running ads. Here’s what…” The hook lands. The context is clear.

I write subject lines in a text file on my phone now. If I can’t read the full thing in the preview without tapping, I rewrite it. This isn’t optional. It’s the reality of where and how people read email.
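The phone-preview check can also live in a script. A small sketch — the 35-character cutoff is an assumed round number for portrait-mode iPhone Mail, not a fixed spec:

```python
MOBILE_PREVIEW = 35  # assumed iPhone portrait cutoff; varies by device/client

def preview(subject: str, limit: int = MOBILE_PREVIEW) -> str:
    """Show what a subject line looks like after mobile truncation."""
    if len(subject) <= limit:
        return subject
    return subject[:limit].rstrip() + "…"

print(preview("You won't believe what happened when we stopped running ads"))
print(preview("We stopped running ads. Here's what happened"))
```

If the hook survives the ellipsis, the line passes; if the truncated version reads generic, rewrite it so the important words come first.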

The emoji debate: When to use them (and when to avoid them)

I’ve run tests on emojis across probably fifty different accounts at this point. The results are inconclusive in the way that makes marketers uncomfortable. Sometimes emojis lift open rates. Sometimes they hurt them. It depends entirely on the audience and the context.

What I can tell you is what I’ve observed.

Emojis work in B2C consumer brands, especially in lifestyle, fashion, food, and entertainment. They add personality. They break up the monotony of text-only subject lines. They signal that the brand is approachable, human.

Emojis tend not to work in B2B, especially in finance, legal, healthcare, or any industry where professionalism is paramount. A subject line with a rocket emoji from a SaaS company? That’s fine. A subject line with a money bag emoji from a wealth management firm? That looks amateur.

I also think emojis work best when they reinforce the message rather than replacing it. “🚨 24 hours left 🚨” is fine but it’s also a crutch. “Your cart is expiring 🛒” uses the emoji to add visual context, not to carry the meaning.

The worst use of emojis I’ve seen is when they’re completely unrelated to the content. A random smiley face. A party popper for a serious announcement. It creates a mismatch that makes the brand look like it doesn’t know what it’s doing.

My rule is this: test emojis if your brand voice allows them. But if you’re not sure, leave them out. A good subject line without an emoji outperforms a mediocre subject line with one.

The power of “you” and possessive pronouns

This is subtle but I’ve seen it move the needle consistently. Subject lines that use “you,” “your,” and “yours” outperform those that use “we,” “our,” and “us.”

The reason is simple. People care about themselves more than they care about your company. A subject line that’s about them gets their attention. A subject line that’s about you gets ignored.

“We’ve launched a new feature” is about the company. The reader thinks: that’s nice for you.

“You can now save 3 hours a week with this new feature” is about the reader. The reader thinks: tell me how.

I’ve rewritten subject lines for clients to flip the pronoun and seen open rates increase by 10-20% with no other changes. It’s not magic. It’s just aligning your message with what the reader actually cares about.

Implementing a Winning Subject Line Strategy

How to set up a proper A/B test (Statistical significance)

Most A/B testing I see is not actually A/B testing. It’s guessing with extra steps.

People will send a subject line to 10% of their list, see which one has a higher open rate, declare a winner, and send it to the remaining 90%. That’s not testing. That’s rolling dice. The sample sizes are too small, the test is underpowered, and the result is usually not statistically significant.

I have a rule for subject line tests. You need at least 1,000 recipients per variation to get meaningful data. That’s the floor. If your list is smaller than that, you don’t have enough volume to test. You’re better off using best practices and waiting until you have scale.

When I run a test, I do it the same way every time. I select a segment that’s large enough—usually 20-30% of the list. I split it evenly. I send two variations, varying only one element. Not subject line and preheader. Not subject line and send time. One variable. Subject line only.

Then I wait 24 hours. I check the open rates and I run a statistical significance calculator. If one variation is winning with 95% confidence, I declare a winner and send it to the rest of the list. If the test is inconclusive, I send the control to the rest of the list and run another test next time.
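The significance check itself is a standard two-proportion z-test, which is what most online calculators run under the hood. A self-contained sketch with illustrative numbers:

```python
from math import sqrt, erf

def two_prop_pvalue(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for a two-proportion z-test (pooled variance).
    x = unique opens (or clicks), n = recipients per variation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # Normal CDF via erf; p-value is 2 * P(Z > |z|)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# 25% vs 22% opens with 2,000 recipients per arm: clears 95% confidence
print(two_prop_pvalue(500, 2000, 440, 2000))
# The same rates with only 200 per arm: indistinguishable from noise
print(two_prop_pvalue(50, 200, 44, 200))
```

The second call is the instructive one: the identical 3-point lift that is significant at 2,000 per arm is statistically meaningless at 200 per arm, which is exactly why small-list "winners" are usually noise.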

I’ve seen people declare winners based on a 1% difference with 200 recipients per variation. That’s not data. That’s noise.

The other mistake I see is testing too many variables at once. A/B/C/D/E tests with five different subject lines. Now you’re splitting your sample five ways. None of the groups have enough volume to reach significance. You’ve wasted a send and learned nothing.

Test one thing. Test it properly. Learn something. Move on.

Creating a “swipe file” for ongoing inspiration

I have a folder in my notes app called “Subject Lines.” It has hundreds of entries. Every time I see a subject line that makes me stop—whether it’s in my own inbox, a competitor’s email, a friend’s newsletter—I save it. I don’t save it to copy it. I save it to understand why it worked.

Some of them are curiosity gaps. Some are urgency plays. Some are just beautifully simple. The point is to build a library of patterns, not to steal specific lines.

I review this file before every major send. Not to find the perfect subject line, but to remind myself what’s possible. When you write subject lines every day, you fall into patterns. You start writing the same three or four structures over and over. The swipe file breaks that pattern. It shows you a structure you haven’t used in a while, a format you’d forgotten about.

I recommend anyone who writes email regularly do the same. Set up a folder. Every time you see a subject line that makes you open an email, save it. Every time you see one that makes you laugh or think or feel something, save it. Over time, you’ll build a collection that’s specific to your taste and your industry.

Then use it. Not to copy, but to learn. Ask yourself: why did this work? What pattern is it using? How could I apply that pattern to my audience and my offer?

Testing is the only universal truth

I’ve given you frameworks. Curiosity gaps. Behavioral personalization. Urgency. Problem-agitation-solution. Mobile optimization. Pronouns. Emojis. All of these are patterns I’ve seen work across hundreds of accounts and thousands of sends.

None of them are guarantees.

I’ve had curiosity gap subject lines fail. I’ve had urgency subject lines fall flat. I’ve had tests where “you” lost to “we” for reasons I still don’t fully understand. Every audience is different. Every offer is different. Every context is different.

The professionals understand this. The amateurs want a formula.

If someone tells you they know the perfect subject line formula for your business, walk away. They’re selling certainty they don’t have. The only universal truth in subject line strategy is that you have to test. Not once. Not quarterly. Constantly. Every send is an opportunity to learn something about what your audience responds to.

I test subject lines on every single broadcast I send. Every one. Even when I’m confident. Especially when I’m confident. Because I’ve been humbled enough times to know that my instincts are wrong about as often as they’re right.

The best subject line strategy isn’t about being clever. It’s about being disciplined. Set up the tests. Measure the results. Let the data tell you what works. Then test again.

The Preheader Text: Your Second Subject Line (That Most People Ignore)

Introduction – The Most Valuable Real Estate You’re Wasting

Defining the preheader (snippet text)

The preheader is that line of text you see next to or below the subject line when an email lands in your inbox. Sometimes it’s the first few words of the email body. Sometimes it’s a custom snippet. Sometimes it’s a garbage string of code that someone forgot to delete. But it’s always there, and it’s always visible, and most marketers treat it like an afterthought.

I’ve sat in strategy meetings where we spent forty-five minutes debating a single word in a subject line. The same meetings where the preheader was whatever the email template defaulted to. The disconnect never stops surprising me. You’ll have a team obsessing over the first thirty characters of the subject line while ignoring the eighty characters directly below it that also appear in the inbox preview. It’s like spending hours polishing the front door while leaving the windows boarded up.

The preheader serves two functions, whether you intend it to or not. First, it gives the email client something to display in the preview pane. Second, it gives the reader additional context before they decide to open. If you don’t tell it what to say, the email client will grab whatever it finds first in the email body. Usually that’s a “View in Browser” link or a line of footer text. Neither of those is going to convince anyone to open your email.

I learned to respect the preheader about eight years ago when I was running a campaign for a B2B software company. We had a subject line that tested well: “The feature you’ve been asking for.” Open rate came in at 12%. Disappointing. I pulled up the inbox view and saw that the preheader was pulling from the footer: “You are receiving this email because you signed up for…” That’s what people saw below the subject line. Not a helpful second pitch. Just legal boilerplate.

The next send, we customized the preheader to say: “It’s finally here. See how it works inside.” Open rate jumped to 24%. Same subject line. Same audience. Same offer. The only thing that changed was we stopped letting the email client decide what to show people.

That’s when I started treating preheaders as second subject lines. Because that’s exactly what they are.

Visualizing the inbox: Mobile vs. Desktop layouts

The way preheaders display depends entirely on where your email is being read. And most of your emails are being read on mobile.

On an iPhone, using the default Mail app, the preheader appears directly below the subject line in a smaller, lighter font. The subject line shows first. The preheader shows second. Both are visible without the user tapping anything. The same is true for Gmail’s mobile app, though the formatting is slightly different—subject line on top, preheader in a smaller font below.

On desktop, it varies. Gmail shows the subject line in bold with the preheader text in a normal weight, gray font, directly to the right. Outlook shows the subject line with the preheader text in a smaller font below or to the side depending on the version. Apple Mail on Mac shows the subject line and then the first line of the email body, which is usually your preheader if you’ve set one.

The key takeaway is that the preheader is visible in every major email client, on every device, without the user doing anything. It’s not hidden. It’s not secondary. It’s right there, in the preview, alongside the subject line, competing for the same three seconds of attention.

I’ve seen the data on this. Inbox previews show between 80 and 120 characters total, depending on the device and client. The subject line takes up the first 30-50 of those characters. The preheader takes up the rest. That means the preheader is responsible for anywhere from 40% to 70% of the preview real estate. If you’re not customizing it, you’re giving up more than half of your prime inbox real estate to whatever garbage your email template coughs up.
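That budget math is worth making concrete. A sketch using an assumed 100-character total preview (a round number inside the 80–120 range; real values vary by client and device):

```python
PREVIEW_TOTAL = 100  # assumed total preview characters; varies by client

def preheader_chars_visible(subject: str, total: int = PREVIEW_TOTAL) -> int:
    """Rough count of preview characters left for the preheader
    after the subject line takes its share."""
    return max(0, total - len(subject))

print(preheader_chars_visible("Black Friday Starts Now"))
```

A 23-character subject leaves roughly 77 preview characters for the preheader — more space than the subject line itself got, which is the whole argument for writing it deliberately.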

Strategic Functions of the Preheader

The complement strategy: Extending the subject line

The most straightforward way to use a preheader is to let it finish the thought your subject line started. The subject line hooks. The preheader gives the context that makes the hook land.

I use this constantly. The subject line is short, punchy, optimized for mobile truncation. The preheader carries the explanatory weight. Together, they form a complete sentence or thought that the reader can process in one glance.

Subject: “Black Friday Starts Now” | Preheader: “Extra 20% off for first 100 customers”

Here’s what’s happening in this pairing. The subject line announces an event. It’s broad. It tells you that something is happening, but not why you should care specifically. The preheader adds the urgency and the exclusivity. Now the reader knows: not only is the sale happening, but there’s a limited window and a limited quantity. The hook is extended into a full value proposition.

Another example from a SaaS client. Subject: “Your trial ends in 3 days.” Preheader: “Here’s what you’ll lose access to—and how to keep it.” The subject line creates urgency. The preheader adds a curiosity gap and a solution path. Together, they tell the reader: there’s a problem coming, and this email contains the answer.

I’ve found that the complement strategy works best when the subject line is the headline and the preheader is the subheadline. The subject line grabs attention. The preheader tells the reader why that attention is justified. They work as a system, not as two separate elements.

The mistake I see with complement strategy is redundancy. If the preheader just repeats what the subject line said, you’ve wasted the space. “Black Friday Starts Now” as the subject and “Our biggest sale of the year is here” as the preheader tells the reader the same thing twice. It doesn’t add information. It doesn’t deepen the hook. It just takes up space.

The contrast strategy: Adding missing context

Sometimes the subject line is a hook that needs context that doesn’t naturally extend from it. The contrast strategy uses the preheader to add a layer of information that changes how the subject line is interpreted.

I used this for a client in the financial space. The subject line was “You’re losing money.” That’s a strong hook. It’s also potentially alarming. The preheader we paired with it was “Not in the way you think—read this before you panic.” The preheader changed the tone from alarm to curiosity. The reader knew this wasn’t a fraud alert or a collections notice. It was marketing content framed around a financial insight.

Another example from a nonprofit client. Subject: “We need to talk.” Preheader: “It’s not bad news, we promise. But it is important.” The subject line alone could signal a crisis. The preheader defused that while maintaining curiosity. The open rate on that campaign was 52%.

The contrast strategy is useful when your subject line is provocative enough to create interest but potentially ambiguous enough to create the wrong expectation. The preheader guides the interpretation. It tells the reader what kind of email this is going to be, which reduces the friction of opening. People don’t open emails they’re uncertain about. The contrast preheader removes that uncertainty while preserving the hook.

The call-to-action (CTA) preheader: Telling them what to do

This is the most direct approach. The preheader tells the reader exactly what you want them to do, and why they should do it.

I’ve seen this work exceptionally well in transactional and promotional emails. The subject line announces the offer. The preheader tells the reader what action to take.

Example from an e-commerce client. Subject: “Your cart is waiting.” Preheader: “Click here to complete your purchase before these items sell out.” The preheader isn’t subtle. It’s not clever. It’s a direct instruction with a scarcity motivator attached.

Another from a webinar promotion. Subject: “How to scale your agency.” Preheader: “Save your seat now—space is limited to 100 attendees.” The subject line promises value. The preheader creates urgency and tells the reader how to claim that value.

The CTA preheader works because it removes ambiguity. The reader knows exactly what will happen when they open the email. There’s no mystery. There’s no fear of being sold to in a way they don’t want. The transaction is clear: open this email to take the action you already know you want to take.

Common Preheader Mistakes That Kill Opens

The default “View in Browser” or “If you cannot see this…”

I still see this constantly. Emails go out with the preheader set to whatever the email service provider automatically populated. Usually that’s a “View in Browser” link or the first line of the email body, which is often an “If you cannot see this email, click here” message.

Here’s what that looks like in the inbox. Subject line: “20% off your next order.” Preheader: “View this email in your browser.” The reader sees that and thinks: this is a generic marketing email. Nothing in that preheader tells them why they should open. It’s a utility message that adds zero value to the decision-making process.

I had a client whose preheader was defaulting to “Click here to add us to your address book” for six months. Six months. They were sending to a list of 80,000 people twice a week. Every single one of those sends had a preheader that said nothing about the content, the offer, or the value. That’s hundreds of thousands of wasted opportunities to add context and increase opens.

The fix is simple. You go into your email template and you add a custom preheader variable. Then you write a unique preheader for every send. It takes thirty seconds. There’s no excuse for default text.

Leaving it blank (a massive missed opportunity)

Some marketers know enough to avoid the default text but then make the opposite mistake. They leave the preheader field blank, assuming the email client will figure something out.

When you leave the preheader blank, the email client pulls the first text it finds in the HTML of the email body. Sometimes that’s a headline. Sometimes it’s a navigation link. Sometimes it’s a paragraph of body copy that makes no sense out of context. You’re leaving the decision to the algorithm, and the algorithm doesn’t care about your open rates.

I’ve seen preheaders that were pulled from navigation menus: “Shop Women Men Kids Sale About Us.” That’s what appeared below the subject line. The reader sees that and has no idea what the email is actually about. They’re not going to open to find out.

A blank preheader is a missed opportunity to control the narrative. You have the chance to put your message in front of the reader twice—once in the subject line, once in the preheader. If you leave the preheader blank, you’re choosing to only use one of those opportunities.

Repetition: Simply copying the subject line

This one makes me crazy. A marketer will write a subject line, then write the exact same thing in the preheader field. Subject: “Our biggest sale of the year.” Preheader: “Our biggest sale of the year.” That’s not a strategy. That’s a placeholder.

The preheader should add something. If it’s just repeating what the subject line already said, you’ve created redundancy without value. The reader gets the same information twice. Nothing new compels them to open.

I had a client argue that repeating the subject line in the preheader reinforced the message. I ran a test. The repeated preheader versus a complementary preheader that added context. The complementary preheader won by 18%. People didn’t need the message reinforced. They needed a reason to act.

Technical Implementation & Best Practices

How to code custom preheaders in major ESPs (Klaviyo, Mailchimp)

The technical implementation varies by platform, but the principle is the same across all of them. You need to add a hidden piece of HTML at the very top of your email body that contains your custom preheader text. This text is invisible in the email itself but appears in the inbox preview.

In Klaviyo, there’s a dedicated field in the email builder for preheader text. You find it in the campaign settings. Type your text, save it, and Klaviyo handles the HTML. It’s the easiest implementation of any major platform.

In Mailchimp, the field is called “Preview text.” You’ll find it in the campaign builder alongside the subject line: edit the subject, enter your preview text in the field below it, and Mailchimp will inject it into the email HTML.

In HubSpot, you’ll find preheader settings in the email editor under “Advanced Options.” In ActiveCampaign, it’s in the campaign settings under “Email Options.”

For any platform that doesn’t have a dedicated field, you need to add the HTML manually. The code looks like this:

```html
<!-- Hidden preheader: place as the very first element inside <body>.
     mso-hide:all keeps it hidden in Outlook desktop clients as well. -->
<div style="display:none;font-size:1px;color:#ffffff;line-height:1px;max-height:0px;max-width:0px;opacity:0;overflow:hidden;mso-hide:all;">
  Your preheader text goes here.
</div>
```

You put this at the very top of the email body, before any visible content. The text is hidden from the reader inside the email but visible to the email client in the preview pane.

I’ve had developers argue that this is hacky. It’s not. It’s the industry standard. Every major brand using email marketing does this. It’s how you control what appears in the preview without affecting the design of the email.

Character limits: The 40-90 character sweet spot

How long should a preheader be? The answer depends on where it’s being read, but I’ve settled on a range based on testing across hundreds of campaigns.

On mobile, the typical inbox preview shows between 40 and 90 characters for the preheader, depending on the device and the email client. That’s your effective range. Less than 40 characters and you’re leaving preview space unused. More than 90 and you risk truncation, which means the last part of your message gets cut off.

I aim for 60-80 characters on most sends. That’s enough room for a complete thought without running the risk of truncation in most clients. If I need more space, I’ll push to 90, but I test the preview in multiple clients to make sure the full message displays.

The one exception is when I’m using the complement strategy with a short subject line. If the subject line is 20 characters, I’ll sometimes let the preheader run to 100 or more because the combined preview length is longer than the individual elements. But I still check truncation.

The technical implementation matters here too. The hidden div method ensures that the preheader text doesn’t wrap or break in unexpected ways. If you’re just letting the email client pull from the visible body text, you have no control over where the truncation happens.
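To make those ranges concrete, here's a small illustrative helper (my own sketch, not part of any ESP's tooling) that flags preheaders falling outside the 40-90 character window discussed above:

```python
# Illustrative preheader-length check. The 40/90 bounds come from the
# mobile preview ranges discussed above; adjust per your own client tests.
def check_preheader(text: str, min_len: int = 40, max_len: int = 90) -> str:
    n = len(text)
    if n < min_len:
        return f"too short ({n} chars): preview space goes unused"
    if n > max_len:
        return f"risks truncation ({n} chars): trim or test across clients"
    return f"ok ({n} chars)"

print(check_preheader("Free shipping ends tonight"))  # 26 chars → too short
print(check_preheader(
    "Free shipping on every order ends tonight at midnight. No code needed."
))  # 70 chars → ok
```

A check like this is easy to drop into a pre-send QA script alongside your subject-line review.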

Treat the preheader as a headline, not an afterthought

The shift I’ve seen in the best email programs I’ve worked with is treating the subject line and preheader as a unified creative unit. They’re written together. They’re tested together. They’re optimized together.

I write subject lines and preheaders in pairs now. I don’t write one without the other. I’ll open a document and sketch out ten pairs, then whittle down to the strongest combinations. The subject line might be the hook. The preheader might be the context. Or the subject line might be the curiosity and the preheader the payoff. But they’re designed to work together.

This changes how you think about the creative process. You’re not writing a subject line and then finding some text to fill the preheader field. You’re writing two lines of copy that appear together in the inbox, that will be read together, that will be judged together. They need to feel like they belong together.

I’ve tested subject lines with and without complementary preheaders. The ones with complementary preheaders almost always win. Not because the preheader itself is compelling, but because the combined unit gives the reader more information and more confidence in opening.

The marketers who ignore preheaders are leaving money on the table. Not metaphorically. Literally. Every email you send without a customized preheader is an email that’s underperforming its potential. The work to fix it is minimal. The upside is measurable.

I had a client who started customizing preheaders after years of leaving them blank. Their overall open rate across all campaigns increased by 11% in the first month. No other changes. Same subject line strategies. Same send times. Same audiences. Just adding preheaders that complemented the subject lines.

Eleven percent. That’s not a small lift. That’s a significant improvement from thirty seconds of work per email.

The preheader is not a technical detail. It’s not a box to check. It’s the second line of your headline, and it deserves the same attention you give the first.

Sender Reputation & Deliverability: Why Your Emails Never Reach the Inbox

Introduction – The Foundation of Open Rates

You can’t open what you don’t receive

I’ve sat through too many post-mortem calls where a marketing team is tearing apart their subject lines, their preheaders, their send times, their content. They’ve been staring at an open rate that dropped from 22% to 9% and they’re convinced it’s creative. They rewrite everything. They test new formats. They bring in consultants to audit their copy.

And the whole time, the problem is that half their emails are going to spam.

This happens more often than most marketers want to admit. I’ve seen it at startups with lean teams where no one owns deliverability. I’ve seen it at established brands where the person who set up the DNS records left two years ago and no one has touched authentication since. I’ve seen it at agencies where the focus is always on the next campaign, never on the infrastructure that makes campaigns possible.

Here’s the reality. You can write the best subject line in the history of email marketing. You can nail the preheader. You can have an offer so compelling it would make your grandmother pull out her credit card. None of it matters if the email never lands in the inbox.

I had a client in the health and wellness space who was convinced their content was the problem. Open rates had been declining for six months. They hired me to rewrite their email strategy. First thing I did was run a deliverability audit. Their emails were hitting spam folders at Gmail at a rate of 38%. Outlook was worse. The content was fine. The infrastructure was a disaster. The previous agency had never set up proper authentication. The domain had been flagged for sending to purchased lists years ago and no one had ever cleaned it up.

We spent three months fixing deliverability before we touched a single subject line. By the time we were done, open rates were back to 24% on the same content they’d been sending for a year. The content hadn’t been the problem. The inbox placement had.

The difference between delivery and deliverability

I need to clear up a confusion that causes more wasted effort than almost anything else in email marketing. Delivery and deliverability are not the same thing.

Delivery is binary. Your email service provider tells you an email was delivered. That means the receiving server accepted the message. It doesn’t tell you what that server did with it. It could have gone to the inbox. It could have gone to the spam folder. It could have gone to a promotions tab that the user never checks. Your ESP will report it as delivered either way.

Deliverability is the actual placement. It’s whether your email landed where people can see it. You can have a 99% delivery rate and a 30% inbox placement rate. Your ESP dashboard will look great. Your actual results will be terrible.

I see this confusion constantly. A marketing manager will look at their ESP dashboard, see that delivery is 98%, and assume everything is fine. Then they wonder why open rates are low. The disconnect is that they’re measuring the wrong thing.

The only way to know your actual deliverability is to use inbox placement testing tools. These tools send test emails to a network of test inboxes across all the major providers—Gmail, Outlook, Yahoo, AOL, etc.—and report back where those emails landed. You do this before you send to your list. You do it regularly to monitor your reputation. You do it any time you change infrastructure.

If you’re not testing inbox placement, you’re flying blind.

The Technical Pillars of Trust

Authentication: SPF, DKIM, and DMARC explained simply

I’m going to explain these in a way that doesn’t require a computer science degree. You don’t need to understand the cryptography. You need to understand what each one does and why it matters to your inbox placement.

SPF, Sender Policy Framework, is a DNS record that says “these servers are allowed to send email for my domain.” It’s like a list of authorized senders. When Gmail gets an email from your domain, it checks the SPF record to make sure the server that sent it is on the authorized list. If it’s not, that’s a red flag.

DKIM, DomainKeys Identified Mail, is a digital signature. It attaches a cryptographic signature to every email you send. The receiving server checks that signature against a public key in your DNS records. If the signature is valid, the server knows the email hasn’t been tampered with in transit and that it actually came from you.

DMARC, Domain-based Message Authentication Reporting and Conformance, ties SPF and DKIM together. It tells receiving servers what to do if authentication fails. You can set it to do nothing, to quarantine the message, or to reject it outright. DMARC also gives you reporting on who is sending email using your domain, which helps you spot spoofing attempts.

These three records are the foundation of email authentication. Without them, you’re telling email providers that you don’t care enough about security to set up basic protections. They treat you accordingly.
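As a concrete illustration, here's roughly what the three records look like as DNS TXT entries for a domain sending through an ESP. Every host name, selector, and value below is a placeholder, not copy-paste configuration; your ESP will give you the exact values to publish:

```text
; SPF — authorizes the ESP's servers to send for example.com
example.com.                 TXT  "v=spf1 include:_spf.example-esp.com ~all"

; DKIM — public key the receiving server uses to verify the signature
; ("s1" is the selector; your ESP assigns the real one)
s1._domainkey.example.com.   TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

; DMARC — policy plus an address for aggregate reports
_dmarc.example.com.          TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```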

Why Gmail and Yahoo now require DMARC

In 2024, Gmail and Yahoo rolled out new requirements for bulk senders. If you send more than 5,000 emails per day to Gmail or Yahoo addresses, you are now required to have SPF and DKIM properly configured and to publish a DMARC policy (a policy of p=none is the minimum accepted). The same rules require one-click unsubscribe and a spam complaint rate kept below 0.3%. No exceptions.

This wasn’t a suggestion. It was a mandate. And it changed the landscape overnight.

I had clients scrambling in early 2024 to get DMARC set up because they’d been operating for years without it. Some of them had never heard of DMARC. Some of them had it set up incorrectly. Some had a policy of p=none, which generates reports but enforces nothing against spoofed mail.

The providers made this change because email fraud was out of control. Spoofing, phishing, brand impersonation. DMARC is the primary defense against that. If you’re a legitimate sender and you don’t have DMARC configured, you look the same as a spammer to the algorithms.

If you’re not authenticated today, you’re not getting to the inbox. It’s that simple.

IP Reputation: Shared vs. Dedicated IPs

Every email you send comes from an IP address. That IP address has a reputation. That reputation is based on the sending behavior associated with that IP. Good behavior builds reputation. Bad behavior destroys it.

The risks of “bad neighbors” on shared IPs

When you use a shared IP pool, you’re sharing that reputation with every other sender on that IP. Most email service providers use shared IP pools by default. They put thousands of senders on the same IP addresses. If most of those senders are well-behaved, the IP reputation is good. If a few of them do something stupid—buy a list, get high spam complaints, send to spam traps—that affects everyone on that IP.

I’ve seen this play out multiple times. A client on a shared IP pool sees deliverability drop for no reason they can identify. Nothing changed on their end. But someone else on their shared IP pool got flagged. Now everyone on that IP is paying the price.

The solution is a dedicated IP. When you have a dedicated IP, your reputation is yours alone. If you’re well-behaved, you build a good reputation. If you’re not, you have no one to blame but yourself. The downside is that dedicated IPs start with no reputation. You have to warm them up. And if your volume is low, a dedicated IP can actually be worse than a shared pool because the reputation algorithms expect consistent volume.

I recommend dedicated IPs for senders who send at least 50,000 emails per month and have consistent volume. If you’re sending sporadically or in low volume, a shared IP from a reputable ESP is fine—but you need to choose an ESP that actively manages their shared IP pools and kicks out bad actors.

Engagement Metrics as Gatekeepers

How ISPs (Gmail, Outlook) use spam complaints

Here’s what email providers actually care about: what their users do with your emails.

Gmail doesn’t care about your subject lines. Gmail doesn’t care about your content. Gmail cares about whether its users mark your email as spam, whether they delete it without opening, whether they move it to the inbox from the promotions tab, whether they reply to it.

Every action a user takes is a signal. Positive signals tell Gmail you’re a legitimate sender worth delivering to the inbox. Negative signals tell Gmail you’re noise that should be filtered out.

The most damaging signal is the spam complaint. When a user clicks the “Report spam” button, that’s a direct vote against you. If enough people do it, your reputation tanks.

The thresholds are well documented. Google’s own guidance is to keep your spam complaint rate below 0.1% and never let it reach 0.3%. Above 0.3% is dangerous territory where Gmail may start filtering you outright. Above 0.5% and you’re in active trouble; most ESPs will suspend your account if you hit 0.5% consistently.

The problem is that most marketers don’t know their spam complaint rate. Their ESP dashboard shows opens and clicks. It doesn’t always show complaints. You have to go looking. You have to check it. And if you see it creeping up, you have to figure out why.

In my experience, the leading cause of spam complaints is a mismatch between expectation and reality. Someone signed up for one thing and you’re sending something else. They expected weekly tips and you’re sending daily sales pitches. The subject line promised one thing and the email delivered another. When people feel tricked, they hit spam.

The role of spam traps

Spam traps are email addresses that exist only to catch spammers. They’re not real people. They don’t sign up for lists. They don’t engage. If you hit a spam trap, it means you’re doing something you shouldn’t be doing.

Pristine traps vs. recycled traps

There are two kinds of spam traps.

Pristine traps are email addresses that have never been used for anything. They’ve never signed up for a newsletter. They’ve never made a purchase. They exist only in the databases of the spam trap operators. If you hit a pristine trap, it means you acquired that email address through illegitimate means—scraping, buying lists, using some shady source. Hitting a pristine trap is a major black mark against your reputation.

Recycled traps are email addresses that were once valid but have been abandoned. The domain owner turned them into spam traps. If you hit a recycled trap, it means you’re not doing list hygiene. You’re still sending to addresses that haven’t been active for years. This is less severe than hitting a pristine trap, but it’s still a clear signal that you’re not maintaining your list properly.

I’ve seen senders hit spam traps because they never removed invalid addresses from their list. Bounces accumulate. The same addresses bounce month after month. Eventually, the domain owner may convert that dead address into a trap. Now you’re hitting a trap every time you send, and your reputation is getting dinged.

The fix is list hygiene. Remove addresses that bounce consistently. Remove addresses that haven’t engaged in 6-12 months. Don’t let dead addresses accumulate. Every dead address on your list is a potential trap.

List hygiene: Removing bots and inactive users

Bots are a growing problem in email marketing. Signup forms get hit by bots that submit fake email addresses. Those addresses don’t open emails. They don’t click. They don’t engage. But they’re on your list, and they’re hurting your metrics.

I worked with a B2B client whose list had 15% bot signups. They had a simple signup form with no CAPTCHA. Bots were hitting it constantly. Those bot addresses never opened emails, which drove down their overall open rate and sent engagement signals to ISPs that their list was low-quality.

We added CAPTCHA to the signup form and implemented a honeypot field—a hidden field that bots fill out but humans don’t. Bot signups dropped to near zero. Open rates among new subscribers increased by 25% because the new subscribers were actually real people.
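The honeypot technique mentioned above can be sketched in a few lines of form HTML. The field name, CSS, and endpoint here are illustrative; the one requirement is that your server rejects any submission where the hidden field is non-empty:

```html
<form action="/subscribe" method="post">
  <input type="email" name="email" placeholder="you@example.com" required>

  <!-- Honeypot: hidden from humans via CSS, but bots tend to auto-fill it.
       Server-side, discard any submission where this field has a value. -->
  <input type="text" name="website" tabindex="-1" autocomplete="off"
         style="position:absolute;left:-9999px" aria-hidden="true">

  <button type="submit">Subscribe</button>
</form>
```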

The larger issue is inactive users. Every list has them. People who signed up, maybe engaged for a while, then drifted away. They’re not opening your emails. They’re not clicking. But you’re still sending to them, and every send to an inactive user is a negative engagement signal.

The solution is to set a threshold. I use six months for most clients. If someone hasn’t opened or clicked in six months, they go into a sunset flow. A series of emails designed to re-engage them. If they don’t respond, they get removed.

This feels scary to marketers who are judged on list size. But a smaller, engaged list is worth more than a large, dead list. The engaged list builds your reputation. The dead list destroys it.

How to Diagnose and Repair Your Reputation

Tools of the trade (GlockApps, Google Postmaster Tools)

If you’re not using these tools, you’re guessing.

Google Postmaster Tools is free and it’s essential if you send to Gmail addresses, which most of us do. It shows you your domain reputation, your IP reputation, your spam complaint rate, your authentication status, and your encryption status. It’s Gmail telling you, directly, what they think of your sending practices.

If you don’t have Google Postmaster Tools set up, stop reading this and go set it up. It takes five minutes. It gives you data you can’t get anywhere else.

GlockApps is a paid tool that does inbox placement testing. You upload your email or connect your ESP, and GlockApps sends it to a network of test inboxes across all the major providers. It tells you exactly where your email landed—inbox, spam, promotions, etc.—for each provider.

I run a GlockApps test before every major campaign. Not every send, but any time I’m sending to a large segment or testing something new. I want to know, before I hit send, whether my email is going to hit the inbox.

There are other tools. SendForensics, Validity Everest (which absorbed 250ok), MxToolbox. They all do variations on the same thing. The important part is using at least one of them regularly. Deliverability isn’t set-and-forget. It changes over time. You need to monitor it.

The IP warm-up process for new domains

If you’re sending from a new domain or a new dedicated IP, you cannot just start sending at full volume. You will hit spam folders immediately. The reputation algorithms need to see consistent, positive engagement before they trust you.

The warm-up process is simple but requires discipline. You start by sending low volumes to your most engaged subscribers. People who have opened and clicked recently. You send small batches, gradually increasing volume over days or weeks.

A typical warm-up schedule for a new IP might look like this. Day one: 500 emails. Day two: 1,000. Day three: 2,000. Day four: 5,000. Day five: 10,000. And so on until you’re at full volume. The exact numbers depend on your total volume and your engagement rates.

The key is that you need consistent engagement during warm-up. If you send to low-engagement segments during warm-up, you’re sending negative signals when the algorithms are most sensitive. Start with your best subscribers. Build positive history. Then gradually expand to less engaged segments.

I’ve seen senders skip warm-up entirely and then wonder why their open rates are 2%. The algorithms don’t trust them. They haven’t earned that trust.

Deliverability is earned, not given

This is the core truth that separates professional email programs from amateur ones.

Email providers don’t owe you inbox placement. You haven’t paid for it. Gmail isn’t a delivery service you subscribe to. You earn placement by sending emails that their users want to receive. You prove yourself over time. You build a track record of positive engagement and low complaints. You authenticate your infrastructure. You maintain your list. You monitor your metrics.

And if you stop doing those things, if you let your list decay, if you get sloppy with authentication, if you send to low-quality segments, the algorithms will notice. Your reputation will decline. Your inbox placement will suffer. And getting it back is much harder than keeping it in the first place.

I’ve helped clients recover from damaged reputations. It takes months. You have to identify the problem. Clean your list. Fix your authentication. Warm up new infrastructure if necessary. Send only to engaged users. Monitor every campaign. Watch your spam complaints like a hawk. Slowly, over time, the reputation rebuilds.

It’s faster to keep your reputation than to rebuild it.

The marketers who succeed at email long-term are the ones who treat deliverability as a core competency, not a technical detail they can outsource to their ESP. They understand that every send is a vote. Positive votes build trust. Negative votes destroy it. And the inbox placement that results from that trust is the foundation everything else is built on.

Segmentation & Personalization: The Silver Bullet for Engagement

Introduction – Spray and Pray is Dead

Why generic blasts have declining open rates

I started my career in email marketing during what I now think of as the blast era. You had a list. You had a message. You sent it to everyone. Maybe you segmented by something simple—country, maybe—but mostly you just hit send and hoped.

That era is over. It’s been over for years, but some marketers haven’t gotten the memo.

I see it all the time. A brand with fifty thousand subscribers sends the exact same email to every single person on their list. The lead who signed up for product updates, the customer who bought last week, and the person who downloaded a white paper three years ago and never engaged again all get the same message. The open rates are in the toilet. The unsubscribe rate is climbing. And the marketing manager is baffled.

Here’s what’s happening. People are drowning in email. The average office worker gets over a hundred emails a day. Their personal inbox is another fifty. They have learned, through years of conditioning, to ignore anything that doesn’t feel immediately relevant to them. A generic email from a brand they bought from once? That’s noise. It gets deleted without a glance.

I had a client in the home goods space who was sending a weekly newsletter to their entire list of eighty thousand. Open rates had been dropping for two years. They were at 9% when I came in. I asked them what the newsletter was about. “New products, design inspiration, sales,” they said. Did they think a customer who just bought a dining table cared about new dining tables? Did they think someone who had never bought anything cared about design inspiration for a home they might not own? The answer was no, but they hadn’t thought about it that way.

We stopped the blast. We started segmenting. Within three months, overall open rates on segmented campaigns were at 22%. The newsletter continued, but only for people who had shown interest in content. Everyone else got emails tailored to where they were in the customer journey. The 9% blast rate was a symptom. The disease was treating everyone the same.

The data: Segmented campaigns drive 14-30% higher opens

I don’t need to rely on my own experience for this. The data is overwhelming. Mailchimp analyzed billions of emails and found that segmented campaigns had 14% higher open rates than non-segmented campaigns. Other studies put the number even higher. I’ve seen lifts of 30% or more in my own work when segmentation is done right.

The reason is simple. When you segment, you’re sending fewer emails to each person. But the emails you send are more relevant. And relevance drives engagement.

I ran a test for a B2B SaaS client a few years ago. We had a list of about forty thousand leads. Some were active trial users. Some were past customers. Some were cold leads who had downloaded content but never started a trial. We had been sending the same monthly newsletter to all of them.

We created three segments. Trial users got emails focused on product tutorials and implementation tips. Past customers got emails about new features and upgrade paths. Cold leads got emails with case studies and ROI content. The same number of emails overall. The open rates on the segmented sends were 28%, 24%, and 19% respectively. The previous blast open rate was 14%.

The numbers don’t lie. When you send people what they actually want, they open it. When you send them what you want to send them, they ignore it. Segmentation is the bridge between your messaging goals and your audience’s attention.

The Layers of Segmentation

Demographic segmentation (Location, age, gender, job title)

Demographic segmentation is the shallow end of the pool. It’s where most marketers start, and for good reason. It’s easy. The data is usually available. And it works.

Location is the most straightforward. If you have a physical business, if you host events, if your product is seasonal, location segmentation is essential. I worked with a brand that sold winter gear. They had customers in Minnesota and customers in Florida. Sending the same “prepare for winter” email to both groups was actively stupid. We segmented by region and saw open rates on winter campaigns increase by 35% in warm climates because we stopped sending them irrelevant content.

Age and gender are trickier. They can be useful if your product genuinely appeals differently across demographics. But I’ve seen marketers make assumptions that weren’t supported by the data. “Our product is for women, so we’ll send differently to men.” Except when we looked at the data, the men who bought were just as engaged with the same content. The assumption was wrong. Test before you assume.

Job title is critical for B2B. A CTO and a junior developer might both be on your list, but they care about different things. The CTO cares about ROI, security, scalability. The developer cares about documentation, API, ease of implementation. Sending them the same content is doing a disservice to both.

Behavioral segmentation (The gold mine)

Demographics tell you who someone is. Behavior tells you what they want. Behavioral segmentation is where the real power is.

Past purchase behavior

What someone bought tells you what they’re likely to buy next. This is the foundation of every good e-commerce email program.

I managed email for a beauty brand with hundreds of SKUs. A customer who bought a moisturizer is in a different category than a customer who bought a lipstick. The moisturizer customer cares about skincare routines, ingredients, hydration. The lipstick customer cares about color, finish, wear time. Sending them the same promotional emails was leaving money on the table.

We built segments based on product categories purchased. Skincare customers got skincare emails. Makeup customers got makeup emails. The cross-sell emails—”you bought moisturizer, here’s the serum that pairs with it”—went to the skincare segment. Open rates on those cross-sell emails were 34%. The generic promotional emails were opening at 16%.

Past purchase behavior also tells you who your high-value customers are. Someone who buys once a year is different from someone who buys once a month. They deserve different treatment. The high-frequency buyer might be interested in a loyalty program, early access, exclusive offers. The low-frequency buyer might need a reason to come back.

Website browsing activity

Purchase behavior tells you what someone did. Browsing activity tells you what they almost did.

This is where tools like Klaviyo and customer data platforms become essential. You can see what products someone viewed, what categories they spent time in, whether they added something to cart and abandoned it. All of that data can drive segmentation.

I worked with a furniture brand where browsing activity was the primary segmentation driver. Someone who looked at sofas got emails about sofas. Someone who looked at lighting got emails about lighting. The generic newsletter was dying. The behavior-based sends were thriving.

The most powerful application is the abandoned cart. But that’s a category of its own. The broader point is that browsing activity tells you about intent that hasn’t yet converted. It’s a signal of interest. If you’re not using it to segment, you’re ignoring a gold mine.

Email engagement history (active vs. inactive)

This is the simplest behavioral segmentation and the one most marketers ignore. Active engagers and inactive engagers should not get the same emails.

Active engagers—people who open and click regularly—are your most valuable subscribers. They want to hear from you. They’re likely to convert. They should get your best offers, your new product announcements, your high-value content. They’ve earned it.

Inactive engagers—people who haven’t opened in 90 days—are a different story. They don’t want your regular emails. Sending them the same campaigns you send to active users is just adding noise to their inbox. They need a different approach. A re-engagement series. A question about their preferences. A sunset flow that removes them if they don’t respond.

I had a client who was sending the exact same campaigns to active and inactive users. Their overall open rate was 12%. When we separated the segments, the active segment was opening at 34%. The inactive segment was opening at 3%. The 12% average was hiding the fact that their active users were actually highly engaged and their inactive users were dead weight.

We built a re-engagement flow for the inactive segment. Some of them came back. The ones who didn’t got removed. The overall open rate for the remaining list jumped to 28% because we stopped diluting our metrics with dead addresses.
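The averaging effect in that story is worth working through. Using the rates from the client above, you can back out how small the active segment must have been to produce a 12% blend:

```python
# Blended open rate as a weighted average of active and inactive segments.
def blended_rate(active_share: float, active_rate: float,
                 inactive_rate: float) -> float:
    return active_share * active_rate + (1 - active_share) * inactive_rate

# Solve 0.34x + 0.03(1 - x) = 0.12 for the active share x:
share = (0.12 - 0.03) / (0.34 - 0.03)
print(f"active share ≈ {share:.0%}")                              # ≈ 29%
print(f"check: blended ≈ {blended_rate(share, 0.34, 0.03):.0%}")  # ≈ 12%
```

In other words, less than a third of the list was doing all the work, and the blended 12% made a genuinely strong segment look mediocre.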

Advanced Segmentation Strategies

The “Last Click” segment (Complementary product upsells)

This is one of my favorite segmentation strategies because it’s so obviously effective and so rarely used.

Someone just bought something. That purchase is the strongest signal of interest you’re ever going to get from them. They’re engaged, they have their credit card out, they’re in a buying mindset. The question is: what do you send them next?

Most brands send a “thank you” email. Maybe a shipping confirmation. Maybe a cross-sell email a week later. That’s fine. But the best practice is to send them an email that offers the natural complement to what they just bought.

If someone bought a camera, send them lenses. If someone bought a suit, send them a tie. If someone bought a course on SEO, send them the course on content marketing. The purchase tells you what they’re interested in. The “last click” segment uses that signal to offer the next logical step.

I managed a client in the outdoor gear space. They sold backpacks, tents, sleeping bags, hiking boots. When someone bought a tent, they got an email about sleeping bags. When someone bought a backpack, they got an email about hydration packs. The open rates on these “last click” emails were consistently above 40%. The conversion rates were higher than any other promotional email they sent.

The reason it works is simple. You’re not guessing what they might want. They told you. They bought a tent. They’re going camping. They need a sleeping bag. You’re not selling to a demographic. You’re selling to a specific need that you know exists because of a recent action.
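The routing logic here is nothing more than a lookup from what was bought to what naturally goes with it. A minimal sketch, with a hypothetical catalog of pairings:

```python
# Sketch: a "last click" upsell lookup keyed on the purchased category.
# The pairings are illustrative examples, not a real product catalog.

COMPLEMENTS = {
    "tent": "sleeping bag",
    "backpack": "hydration pack",
    "camera": "lens",
    "suit": "tie",
}

def next_offer(purchased_category):
    """Return the natural complement to what was just bought, if known."""
    return COMPLEMENTS.get(purchased_category)

print(next_offer("tent"))  # sleeping bag
```

If a category has no known complement, the subscriber simply falls back to your regular flow rather than getting a guess.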

Lifecycle stage (New lead vs. Loyal customer)

A new lead and a loyal customer are different people. They have different relationships with your brand. They have different needs. They deserve different emails.

New leads need education. They need to understand what you do, why it matters, how it works. They need social proof. They need low-friction offers that get them to take the next step. A new lead who gets a “welcome to the family” email followed by a hard sell on your most expensive product is being asked to move too fast.

Loyal customers need recognition. They need to feel valued. They need exclusive offers, early access, behind-the-scenes content. They need to be reminded why they chose you in the first place. A loyal customer who gets the same generic newsletter as a new lead feels taken for granted.

I worked with a subscription box company that had a clear lifecycle problem. Their churn rate was high in months three through six. When we looked at the emails customers were receiving during that period, they were the same onboarding emails new customers got. There was no recognition that these were people who had been around for months. They were being treated like new leads, and they were leaving.

We built a lifecycle segmentation system. Months 1-2: onboarding and education. Months 3-6: engagement and value reinforcement. Months 6-12: loyalty perks and upsell opportunities. Month 12+: retention and win-back. The churn rate dropped by 22% in the first six months.
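That system reduces to a simple function of customer tenure. The exact month boundaries below are judgment calls for one business, not universal rules:

```python
# Sketch of the lifecycle buckets described above, keyed on tenure.
# Boundaries are illustrative; tune them to your own churn data.

def lifecycle_stage(months_since_first_purchase):
    if months_since_first_purchase < 3:
        return "onboarding"   # months 1-2: onboarding and education
    if months_since_first_purchase < 6:
        return "engagement"   # months 3-6: value reinforcement
    if months_since_first_purchase < 12:
        return "loyalty"      # months 6-12: perks and upsell opportunities
    return "retention"        # month 12+: retention and win-back

print(lifecycle_stage(4))  # engagement
```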

Psychographic segmentation (Interest-based groups)

This is the deepest level of segmentation. It’s harder to do because it requires you to infer or ask about interests. But when it works, it works better than anything else.

Psychographic segmentation groups people by values, interests, lifestyle, personality. Why they buy, not just what they buy.

I had a client in the supplement space with a broad product line. Some customers were athletes looking for performance optimization. Some were older adults looking for joint health and longevity. Some were wellness enthusiasts looking for general health. These groups had different motivations, different concerns, different communication preferences.

We built a preference center. When people signed up, we asked them what they were interested in. Performance. Longevity. General wellness. Then we sent them content and offers aligned with that interest.

The performance group got emails about athletic recovery, training optimization, competition nutrition. The longevity group got emails about healthy aging, inflammation, mobility. The wellness group got broader content about overall health, stress management, sleep.

Open rates on the interest-based sends were 40-50% higher than the generic sends. The engagement was deeper. The conversion rates were higher. Because we were speaking to the part of them that cared, not just the demographic bucket they fit into.

Automation Flows: Segmentation on Autopilot

The Welcome Series (Highest open rates)

The welcome series is the most important email flow you will ever build. It also has the highest open rates of any emails you send, often 40-60% or more. People just gave you their email address. They’re expecting to hear from you. They’re paying attention.

The segmentation opportunity here is massive.

The simplest welcome segmentation is based on where someone signed up. Someone who signed up through a discount popup is different from someone who signed up through a content download. The discount seeker wants to save money. The content downloader wants information. Their welcome series should reflect that.

I worked with a financial education company that had two primary acquisition channels. One was a free budgeting template. The other was a webinar about investing. People who came in through the budgeting template got a welcome series focused on spending habits, saving, cash flow management. People who came in through the webinar got a welcome series focused on investing basics, portfolio construction, market education.

The open rates on both series were above 55%. The engagement was high. The conversion rates to paid products were significantly higher than the generic welcome series they had been using before.

Welcome series also allow you to capture additional segmentation data. You can ask questions in the welcome emails. “What are you most interested in?” “What’s your biggest challenge right now?” “What goal are you working toward?” The people who are most engaged—the ones opening and clicking through the welcome series—are the ones most likely to give you this data. And that data feeds into future segmentation.

Abandoned Cart (Urgency + segmentation)

Abandoned cart emails are the workhorse of e-commerce email. They work. But most brands send the same abandoned cart flow to everyone, and that’s a missed opportunity.

The content of your abandoned cart email should depend on what was in the cart.

A high-value cart—someone who abandoned a $500 order—deserves a different approach than someone who abandoned a $20 order. The high-value cart might get a personal email from a sales representative. The low-value cart might get a discount code.

A cart with a single item is different from a cart with multiple items. The single-item cart might need product education. The multi-item cart might need a reminder of the total value they’re leaving behind.

A first-time buyer’s abandoned cart is different from a repeat buyer’s abandoned cart. The first-timer might need trust signals, reviews, reassurance. The repeat buyer might just need a reminder.

I ran abandoned cart segmentation for a fashion retailer. Customers who had purchased before got a simple reminder email with no discount. New customers got a 10% discount code. The repeat buyers converted at 18% without the discount. The new customers needed the incentive. Sending the discount to repeat buyers would have been leaving money on the table.
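The branching described above is simple enough to sketch. The $500 threshold and the flow names are illustrative assumptions, not fixed rules:

```python
# Sketch: routing an abandoned cart to a flow based on cart value,
# purchase history, and item count. Thresholds are hypothetical.

def abandoned_cart_flow(cart_value, is_repeat_buyer, item_count):
    if cart_value >= 500:
        return "personal_outreach"   # high-value cart: human follow-up
    if is_repeat_buyer:
        return "simple_reminder"     # repeat buyers convert without a discount
    if item_count > 1:
        return "total_value_reminder"
    return "discount_reminder"       # first-timers get the incentive
```

The point isn’t the specific thresholds. It’s that one abandoned-cart trigger fans out into several different messages.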

Post-Purchase follow-up

Post-purchase is where most email programs go to die. They send a confirmation, maybe a shipping notification, and then they go silent until the next promotional campaign. That’s a mistake.

The post-purchase window is the highest-engagement period in the customer lifecycle. Someone just gave you money. They’re excited. They’re paying attention. What you send them in the days after purchase shapes their entire relationship with your brand.

Segmentation in post-purchase is about what they bought.

A consumable product—coffee, skincare, supplements—should trigger a replenishment sequence. Tell them when to reorder. Offer a subscription. Keep them in a buying cycle.

A durable product—furniture, electronics, clothing—should trigger a care and maintenance sequence. How to clean it. How to get the most out of it. What accessories they might need.

A high-consideration product—software, courses, complex equipment—should trigger an onboarding sequence. How to set it up. How to use it. What to do if they get stuck.

I worked with a cookware brand that transformed their post-purchase experience with segmentation. Someone who bought a cast iron skillet got emails about seasoning, cleaning, recipes that work best in cast iron. Someone who bought a non-stick pan got emails about care, what utensils to use, recipes that work best in non-stick. The engagement on these segmented post-purchase emails was three times higher than their promotional sends. And the repeat purchase rate increased by 30% because customers felt supported, not just sold to.
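A rough sketch of that routing, with a replenishment nudge timed off an assumed usage window for consumables. The product types, sequence names, and day counts are illustrative:

```python
# Sketch: pick a post-purchase sequence from the product type;
# consumables also get a reorder reminder. All names are hypothetical.
from datetime import date, timedelta

SEQUENCES = {
    "consumable": "replenishment",
    "durable": "care_and_maintenance",
    "high_consideration": "onboarding",
}

def post_purchase_plan(product_type, purchase_date, usage_days=30):
    """Choose a follow-up sequence; consumables get a reorder nudge."""
    sequence = SEQUENCES.get(product_type, "generic_followup")
    reminder = None
    if product_type == "consumable":
        # nudge a few days before the product typically runs out
        reminder = purchase_date + timedelta(days=usage_days - 5)
    return sequence, reminder

print(post_purchase_plan("consumable", date(2024, 3, 1)))
```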

Relevance is the only thing that beats the spam folder

I’ve spent years testing subject lines, preheaders, send times, frequency. All of that matters. But none of it matters as much as relevance.

If someone opens your email and finds something they actually care about, they’ll keep opening. If they open three times and find things they don’t care about, they’ll stop. It’s that simple. The spam folder is where irrelevant email goes to die. The inbox is where relevant email lives.

Segmentation is how you make your email relevant. It’s not a tactic. It’s not a channel strategy. It’s the fundamental recognition that your audience is not one thing. They’re different people with different needs, different interests, different relationships with your brand. The email that excites one person annoys another. The offer that converts one person insults another.

The brands that win at email are the ones that treat their list as a collection of individuals, not a broadcast audience. They build systems that learn what each subscriber wants. They send different things to different people. They respect the fact that relevance is earned, not assumed.

I’ve never seen a brand over-segment. I’ve seen plenty under-segment. The lift from adding one layer of behavioral segmentation is almost always significant. The lift from adding a second layer is usually significant too. The work scales. The returns scale.

Spray and pray is dead. It died because people got too much email and learned to ignore anything that doesn’t feel like it’s for them. The only way back into their attention is to send them something that actually is for them. Segmentation is how you do that.

The Clean List Strategy: Why Unsubscribes Are a Good Thing

Introduction – The Hoarder’s Dilemma

Why a large list is a vanity metric

I’ve sat in boardrooms where a marketing director pulled up a slide showing list growth. Big green arrow pointing up. Subscriber count: 150,000. Everyone nodded. Everyone clapped. The CEO said “great work growing the list.” Nobody asked about engagement. Nobody asked about revenue per subscriber. Nobody asked how many of those 150,000 people had opened an email in the last year.

This happens constantly. Marketers are rewarded for list size. Agencies bill based on list size. Platforms charge based on list size. The entire industry has built incentive structures that encourage hoarding.

Here’s what I’ve learned after managing email programs for over a decade. List size is one of the most misleading metrics in marketing.

I took over a program once where the list was 200,000 people. The previous marketer had been there for four years and had never removed a single inactive subscriber. Every month, they added new people. They never subtracted anyone. The list had grown from 50,000 to 200,000. On paper, it looked like success.

When I ran the actual numbers, 140,000 of those 200,000 people hadn’t opened an email in over a year. They weren’t just inactive. They were ghosts. They weren’t going to buy anything. They weren’t going to engage. They were just there, sitting on the list, costing money and damaging the sender reputation.

The open rate on the last campaign before I started was 8%. After we cleaned the list—removed the 140,000 ghosts—the open rate on the next campaign was 34%. Same content. Same subject lines. Same send time. The only difference was we stopped sending to people who weren’t listening.

The 200,000 number was a lie. It was a vanity metric that hid the truth. The real list was 60,000 people. The other 140,000 were dead weight.

The financial cost of sending to ghosts

Every email you send costs money. Most ESPs charge based on the number of subscribers or the number of emails sent. If you’re paying for 200,000 subscribers but only 60,000 are actually reachable, you’re wasting money. That waste compounds over time.

I worked with a B2B SaaS company that was spending $3,000 a month on their email platform. They had a list of 90,000. When we cleaned the list, we removed 40,000 inactive subscribers. Their monthly bill dropped to $1,800. They saved $14,400 a year just by removing people who weren’t opening emails.

The cost isn’t just financial. Every send to an inactive subscriber is a risk to your deliverability. Email providers watch engagement. When you send to someone who hasn’t opened in a year, you’re telling Gmail that your content isn’t valuable enough to earn attention. That signal accumulates. Over time, it affects your ability to reach the people who do want to hear from you.

The hoarder’s dilemma is this. You think you’re protecting value by keeping subscribers. You’re actually destroying value by keeping them.

The Dangers of Inactive Subscribers

How inactivity hurts your sender reputation

Email providers don’t just look at spam complaints and bounces. They look at what people do with your emails. Or more importantly, what they don’t do.

When someone consistently doesn’t open your emails, that’s a signal. It tells the algorithm that your content isn’t relevant to that user. If enough of your sends go to people who don’t open, the algorithm starts to assume your content isn’t relevant to anyone. Your overall sender score drops. Your inbox placement suffers.

I’ve seen the data on this. For one client, we compared deliverability before and after list cleaning. Before cleaning, their inbox placement at Gmail was 62%. After removing inactive subscribers, it jumped to 88%. The same content. The same sending infrastructure. The only change was we stopped sending to people who weren’t going to open.

The algorithm doesn’t know that the people you’re sending to haven’t engaged in two years. It just sees that you’re sending to a lot of people who don’t open. That looks like a spammer. Spammers don’t care about engagement. They just blast. The algorithm lumps you in with them.

The high risk of spam complaints from forgotten subscribers

Here’s something that happens more often than you’d think. Someone subscribes to your list. They lose interest. They stop opening your emails. But they don’t unsubscribe. Your emails keep coming. Months pass. A year passes. One day, they’re cleaning out their inbox and see your email. They don’t remember signing up. They don’t remember who you are. They hit “Report spam.”

That spam complaint hurts you. It hurts you more than the hundred people who just deleted the email. Spam complaints are the strongest negative signal in the deliverability algorithms. A single spam complaint can outweigh hundreds of opens.

I had a client in the travel space who was sending to a list of 80,000. Their spam complaint rate was 0.3%—dangerously high. When we looked at the complaints, they were almost entirely coming from people who had been on the list for over two years and hadn’t opened in at least eighteen months. These weren’t angry customers. They were people who had forgotten they ever signed up. They saw an email from a travel company they didn’t remember and assumed it was spam.

We removed those old, inactive subscribers. The spam complaint rate dropped to 0.05% the next month.

Inactive subscribers don’t just ignore you. Eventually, some of them will actively hurt you. The longer you keep them on your list, the more likely they are to become spam reporters.

Skewed analytics: Hiding your true engagement rates

This is the insidious one. Inactive subscribers don’t just hurt your reputation and your costs. They hide the truth about your email program.

When you have a list full of ghosts, your open rate looks worse than it actually is. You think your subject lines aren’t working. You think your content isn’t resonating. You make changes based on that bad data. You optimize for an audience that doesn’t exist.

I worked with a brand that was convinced their content was the problem. Open rates had been dropping for two years. They’d rewritten everything. They’d tested new formats. Nothing helped.

When we cleaned their list, the open rate on the active segment was 28%. The overall open rate before cleaning was 11%. The content was fine. The problem was they were measuring themselves against a list that was mostly dead.

Here’s what happens when you don’t clean your list. You see an 11% open rate. You think you need to improve your subject lines. You run tests. You see a 12% open rate on a new subject line. You declare victory. But you’re optimizing within a tiny fraction of your actual potential. The 11% is a ceiling created by your list composition, not a floor you’re trying to raise.

When you clean the list, you see what your email program can actually do. The 28% open rate becomes your new baseline. Now you can optimize from there. You can test subject lines against a real audience. You can see what actually works.

The inactive subscribers are a fog. They obscure everything. You can’t see your program clearly until you remove them.

How to Define and Target Inactivity

Establishing your inactivity window (90 days, 6 months, 1 year)

The first question I get when I recommend list cleaning is always the same: how long is too long?

There’s no universal answer. It depends on your business, your send frequency, and your industry.

For a daily newsletter, someone who hasn’t opened in 30 days is probably gone. For a monthly email, 90 days might be the right threshold. For a seasonal business, six months might be fine.

I use a simple framework. Look at your typical customer journey. How often do people buy? How often do people engage? Set your inactivity window at twice your normal purchase cycle or engagement cycle. If most of your customers buy every three months, set your window at six months. If they engage monthly, set it at 60 days.
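That rule of thumb reduces to one line, but writing it down makes the point: the window is derived from your own cycle, not borrowed from a blog post.

```python
# Sketch of the framework above: inactivity window = roughly twice
# the typical purchase or engagement cycle.

def inactivity_window_days(typical_cycle_days):
    return typical_cycle_days * 2

print(inactivity_window_days(90))  # quarterly buyers -> 180-day window
```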

I had a client in the furniture space. People bought furniture every few years. Their email engagement was sporadic. Someone might not open for six months, then open when they started thinking about a new sofa. A 90-day window would have removed people who were still potential customers. We set it at 12 months. That worked for them.

Another client in the meal kit space. People bought weekly. If someone hadn’t opened in 60 days, they were never coming back. We set the window at 60 days and removed aggressively.

The right window is the one that balances two things. You want to remove people who are truly gone so they stop hurting your metrics. But you don’t want to remove people who are just slow-moving. Look at your data. Find the point where engagement drops off and never recovers. That’s your window.

Segmenting “lurkers” vs. “dead leads”

Not all inactive subscribers are the same. Some are dead. Some are just lurking.

Dead leads are people who haven’t engaged in your full inactivity window. They’re not opening. They’re not clicking. They’re not buying. They’re gone. These are the people you eventually remove.

Lurkers are people who haven’t opened in a while but might come back. They signed up for something specific. They engaged early. They haven’t bought yet, but they haven’t left either.

I treat these two groups differently. Lurkers go into a re-engagement sequence. We try to wake them up. Dead leads get removed.

The segmentation matters because you don’t want to send your aggressive re-engagement emails to people who are still active in other ways. Someone who’s opening every email but not clicking? That’s not a lurker. That’s a reader. Different treatment.

I build segments based on both recency and frequency. Someone who opened in the last 90 days is active. Someone who hasn’t opened in 90-180 days is a lurker. Someone who hasn’t opened in over 180 days is dead. The exact thresholds vary by business, but the principle is consistent. You treat people differently based on how far they’ve drifted.
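Those recency thresholds translate directly into code. A sketch using the 90/180-day cutoffs from the example; yours may differ:

```python
# Sketch: classify subscribers by days since last open.
# The 90/180-day thresholds are the example values, not universal ones.

def engagement_status(days_since_last_open, lurker_after=90, dead_after=180):
    if days_since_last_open <= lurker_after:
        return "active"
    if days_since_last_open <= dead_after:
        return "lurker"
    return "dead"

print(engagement_status(120))  # lurker
```

A fuller version would also factor in frequency, as described above, so a subscriber who opens rarely but consistently isn’t misread as drifting.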

The Re-engagement (Win-Back) Campaign

Strategy: The 3-email sequence

The re-engagement campaign is your last chance to wake up inactive subscribers before you remove them. It’s not a promotional email. It’s a conversation. A check-in. A question.

I’ve run dozens of these campaigns. The three-email structure is the one that consistently works best. Any more than that and you’re just annoying people who’ve already shown they don’t want to hear from you. Any fewer and you haven’t given them enough chances to respond.

Email 1: “We miss you” (Soft reminder)

The first email is gentle. It acknowledges the absence without blame. It reminds them why they signed up. It asks if they still want to hear from you.

Subject lines I’ve used for this first email: “We miss you.” “Haven’t seen you in a while.” “Is it us or is it you?” Something light. Something that doesn’t feel like a hard sell.

The body is simple. “We noticed you haven’t opened our emails in a while. That’s okay. But we want to make sure you’re still getting what you signed up for. If you want to keep hearing from us, click here. If not, no hard feelings.”

This email usually gets a 5-10% click rate. Not huge. But the people who click are the ones who want to stay. They’re lurkers who just needed a nudge.

Email 2: “Here’s what’s new” (Value showcase)

If they didn’t respond to the first email, they get the second one. This one is different. It’s not asking for permission. It’s reminding them why they signed up in the first place.

Subject lines: “What you’ve missed.” “Here’s what’s new.” “The best of [the last six months].”

The body highlights the best content, products, or features from the period they’ve been inactive. Case studies. New product launches. Customer stories. Things that might rekindle the interest that made them sign up originally.

This email is a value showcase. It’s not asking for anything. It’s saying “look at what you’ve been missing.” If they engage with anything—click a link, view a product—they come out of the inactive segment and go back into the active flow.

This email usually gets a 3-5% click rate. Lower than the first one, but the people who click here are often more valuable. They didn’t just click to stay. They clicked because something interested them.

Email 3: “We hate to see you go” (The ultimatum)

The third email is the breakup. It’s honest. It’s direct. It tells them that if they don’t respond, you’ll remove them from the list.

Subject lines: “We hate to see you go.” “Last chance to stay.” “Are you still interested?”

The body explains that you’re going to stop sending emails to people who aren’t engaged. It’s not a threat. It’s a courtesy. “We don’t want to clutter your inbox. If you want to keep hearing from us, click here. If not, we’ll take you off the list. No hard feelings either way.”

This email usually gets a 1-2% click rate. Small. But these clicks are often the most valuable. They’ve had two chances to leave and they’ve stayed. They’re choosing to be there.

After the third email, anyone who hasn’t clicked is removed. They’re not coming back. Keeping them is just costing you money and reputation.
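The whole sequence behaves like a small state machine: a click at any step returns the subscriber to the active flow, and silence after the third email means removal. A sketch, with hypothetical email names:

```python
# Sketch: the three-email win-back sequence as a state machine.
# Email names are illustrative placeholders.

SEQUENCE = ["we_miss_you", "whats_new", "last_chance"]

def next_action(emails_sent, clicked):
    if clicked:
        return "return_to_active"
    if emails_sent < len(SEQUENCE):
        return SEQUENCE[emails_sent]  # send the next email in the series
    return "remove_from_list"

print(next_action(0, clicked=False))  # we_miss_you
print(next_action(3, clicked=False))  # remove_from_list
```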

The graceful removal process

Removing subscribers feels scary the first time you do it. You’ve spent years building that list. You’re about to delete a chunk of it. It feels like you’re throwing away value.

You’re not. You’re throwing away dead weight.

The removal process should be clean and final. I don’t keep a separate “suppressed” list that I might send to again someday. They’re gone. If they want to come back, they can sign up again. And if they sign up again, they’re starting fresh. That’s fine.

The removal itself is simple. You export the list of subscribers who didn’t engage in the re-engagement campaign. You suppress them in your ESP. You stop sending to them. That’s it.

I’ve had clients ask if they should keep these subscribers in a separate list for occasional “win them back” sends. No. If the three-email sequence didn’t work, another email in six months isn’t going to work either. Let them go.

A smaller, engaged list is more profitable than a large, dead one

I’ve done the math on this across dozens of accounts. The pattern is always the same.

Before cleaning, you have a large list. Low open rates. Low click rates. Low conversion rates. High costs. Mediocre deliverability.

After cleaning, you have a smaller list. Higher open rates. Higher click rates. Higher conversion rates. Lower costs. Better deliverability.

The revenue per subscriber goes up. The cost per subscriber goes down. The net result is usually the same or better revenue with lower expenses. And the revenue is more predictable because you’re dealing with people who actually want to hear from you.

I had a client who was terrified to clean their list. They had 120,000 subscribers. They’d been sending to everyone for years. Their open rate was 9%. Their revenue from email was flat.

We cleaned the list. Removed 70,000 inactive subscribers. The list dropped to 50,000. The CEO panicked. “We just lost more than half our audience.”

The next month, open rates were 31%. Click rates were up 4x. Revenue from email increased by 20%. With half the list.

Because the 50,000 people who remained were the ones who actually wanted to be there. They opened. They clicked. They bought. The 70,000 we removed were never going to buy anything. They were just a drag on the numbers.

A smaller, engaged list is more profitable than a large, dead one. Every time. I’ve never seen an exception to this rule.

The marketers who hoard subscribers are protecting a vanity metric. They’re afraid to see their list size go down because they’ve been rewarded for growth. But the growth they’re protecting is fake. It’s growth that doesn’t produce results.

The professionals clean their lists. They remove the ghosts. They accept that unsubscribes and removals are not failures. They’re the cost of doing business with people who actually want to hear from you.

Every unsubscribe is someone who was never going to buy anyway finally admitting it. Every removal is a weight lifted off your sender reputation. Every list clean is an investment in deliverability, engagement, and revenue.

Stop hoarding. Start cleaning. Your open rates will thank you. Your revenue will thank you. Your deliverability will thank you. And the 60,000 people who actually want to hear from you will finally get the attention they deserve.

Timing & Frequency: Finding the Sweet Spot for Your Audience

Introduction – The Rhythm of Engagement

Why timing matters as much as content

I learned this lesson in 2016, and I’ve never forgotten it. I was running email for a B2B software company. We’d spent weeks on a campaign. Great subject line. Strong content. Clear CTA. We sent it on a Tuesday at 10 AM because that’s what every blog post said to do. Open rate came in at 14%. Fine. Not great.

A few months later, I was traveling in Europe and forgot to schedule a send. I was jet-lagged, awake at 3 AM Eastern, and on impulse I sent the next campaign at 4 AM. Open rate hit 29%. Same content. Same audience. Different time.

That was the day I stopped believing in universal best times.

I started digging into the data. What I found was that our audience—B2B decision-makers—was opening emails in two distinct windows. Early morning, before the workday started. And late evening, after the workday ended. The 10 AM send was landing in the middle of meetings, the middle of the morning rush. It was getting lost.

Timing isn’t a minor variable. It’s a primary variable. It can double your open rates. It can cut them in half. And the optimal timing for your audience is specific to your audience.

I’ve run tests on send time across dozens of accounts. The range of optimal times I’ve seen is staggering. One B2B audience performed best at 6 AM. Another at 8 PM. A consumer brand saw their best results on Sunday afternoons. A nonprofit found that Tuesday at 11 AM was their peak.

There is no universal best time. There’s only your audience’s best time. And you have to find it.

The concept of subscriber fatigue

Timing is one variable. Frequency is the other. And frequency is where most marketers get into trouble.

Subscriber fatigue is real. It’s not a theory. It’s the measurable decline in engagement that happens when you send more email than your audience wants to receive.

I’ve seen the curve play out many times. A brand increases send frequency from twice a week to daily. For the first few weeks, open rates hold steady. Maybe even increase slightly, because they’re capturing more attention windows. Then, around week four or five, the open rate starts to drop. By week eight, it’s down 20-30%. Unsubscribes spike. Spam complaints increase.

The audience is telling you something. You’re sending too much.

The problem is that the signals are lagging. When you increase frequency, the negative effects don’t show up immediately. People don’t unsubscribe after one extra email. They tolerate it for a while. Then one day, they’ve had enough. They hit unsubscribe. Or they start ignoring you. Or they mark you as spam.

By the time you see the drop, the damage is done. You’ve trained a segment of your audience to ignore you. Getting them back is harder than keeping them in the first place.

I’ve learned to treat frequency with the same respect I treat subject lines. It’s not a secondary decision. It’s a strategic one. Send too little, you leave engagement on the table. Send too much, you burn the audience. The sweet spot is narrow. And it moves.

Debunking the “Best Time to Send” Myth

Why generic studies (Tuesday 10 AM) are misleading

I have a file folder of these studies. “The best time to send email is Tuesday at 10 AM.” “The best day is Thursday.” “Weekends are dead.” Every email platform publishes them. Every marketing blog repeats them. And they’re almost useless for individual businesses.

Here’s why. These studies aggregate data across thousands or millions of senders. They average everything. The Tuesday at 10 AM result is the average of B2B and B2C, of retail and SaaS, of small businesses and enterprises, of audiences in New York and audiences in Tokyo. It’s the average of everything, which means it’s specific to nothing.

I had a client in the outdoor gear space who saw their highest engagement on Saturday mornings. People were planning weekend hikes, checking gear, dreaming about adventures. Tuesday at 10 AM, they were in meetings. Sending on Tuesday would have been a waste.

Another client in the financial services space saw peak engagement on weekday evenings. Their audience was professionals who didn’t check personal email during work hours. They opened at night, after the kids were in bed, when they had time to think about their finances.

Both of these audiences would have been poorly served by the “best time” from a generic study.

The studies also suffer from selection bias. They only measure sends that actually happened. If most people send on Tuesday at 10 AM, Tuesday at 10 AM will look like the best time, because that’s when most emails are sent. But that doesn’t mean it’s the best time for your audience. It just means it’s the most common time.

I’ve seen marketers base their entire send schedule on these studies. They send on Tuesday at 10 AM because the study said to. They never test anything else. They never learn that their audience might prefer Wednesday at 7 PM. They’re optimizing for the average of everyone else, not for their own people.

Time zone confusion: The global audience problem

The time zone issue is where most send-time strategies fall apart.

If you have a US-national audience, you have at least four time zones to contend with. If you have an international audience, you have dozens. Sending at 10 AM Eastern means someone in California gets your email at 7 AM. Someone in London gets it at 3 PM. Someone in Sydney gets it at 1 AM the next day.

The simple solution—sending at the same time for everyone—isn’t a solution. It’s a compromise that leaves most of your audience receiving email at a suboptimal time.

I worked with a global B2B SaaS company that was sending all their emails at 9 AM Eastern. Their US audience opened at 22%. Their European audience opened at 9%. The Europeans were getting emails in the afternoon, when they were already deep in work. The timing was wrong.

We switched to send-time optimization. The ESP sent each email at the recipient’s local 9 AM. European open rates jumped to 18% overnight. The US audience stayed at 22%. The overall open rate increased by 15% with no other changes.
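
The scheduling math behind a local-time send is straightforward once each subscriber carries a time zone. This sketch assumes an IANA zone name per recipient (a hypothetical field; in practice ESPs infer it from engagement or signup data) and uses only the Python standard library:

```python
from datetime import date, datetime, timezone
from zoneinfo import ZoneInfo

def utc_send_time(local_hour, tz_name, on_date):
    """Return the UTC instant corresponding to `local_hour`
    on `on_date` in the recipient's time zone (DST-aware)."""
    local = datetime(on_date.year, on_date.month, on_date.day,
                     local_hour, tzinfo=ZoneInfo(tz_name))
    return local.astimezone(timezone.utc)

# A 9 AM local send lands at very different UTC times per zone.
for tz in ["America/New_York", "Europe/London", "Australia/Sydney"]:
    print(tz, utc_send_time(9, tz, date(2024, 3, 20)))
```

Note that `zoneinfo` handles daylight saving automatically, which is exactly the detail a hand-rolled fixed-offset table gets wrong twice a year.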

The companies that ignore time zones are leaving engagement on the table. It’s not a small effect. It’s a massive one. And it’s fixable with the right technology.

Science-Based Timing Strategies

Send-Time Optimization (STO): AI-driven delivery

Send-time optimization is the most important advancement in email timing since the invention of the email server. It solves the time zone problem and the individual preference problem in one go.

Here’s how it works. The ESP tracks each subscriber’s open and click behavior over time. It learns when that individual is most likely to engage. Then, when you schedule a campaign, the ESP doesn’t send it at a fixed time. It sends it at each subscriber’s predicted optimal time.

How ESPs calculate individual user time zones

The calculation is based on historical behavior. The system looks at when a subscriber has opened emails in the past. It looks for patterns. If someone consistently opens at 7 AM on weekdays, the system learns that. If someone opens at 9 PM on weekends, the system learns that too.

The more data the system has, the more accurate it becomes. A new subscriber with no history gets sent at the average time for the segment. After a few opens, the system starts to personalize.
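
To make the idea concrete, here is a deliberately naive sketch of the prediction step: pick each subscriber's modal open hour, with a cold-start fallback to the segment default. Real ESPs use far richer models than this, so treat it as illustration only.

```python
from collections import Counter

def predicted_send_hour(open_hours, segment_default=9, min_opens=3):
    """Pick the hour (0-23) at which this subscriber most often opens.
    With too little history, fall back to the segment default."""
    if len(open_hours) < min_opens:
        return segment_default
    return Counter(open_hours).most_common(1)[0][0]

# Hypothetical open histories (hour of day for each recorded open)
assert predicted_send_hour([7, 7, 21, 7, 8]) == 7   # consistent 7 AM opener
assert predicted_send_hour([21]) == 9               # too little data: default
```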

I’ve used send-time optimization across multiple ESPs—Klaviyo, HubSpot, Mailchimp’s Send Time Optimization feature. The results are consistent. Lift in open rates of 10-20% compared to fixed-time sends. Sometimes higher.

The caveat is that send-time optimization requires volume. If you’re sending to small segments, the system doesn’t have enough data to predict accurately. For large lists, it’s a no-brainer.

I had a client who was skeptical. They’d been sending at 10 AM for years. They thought their audience was used to it. We ran a test. Half the list got the usual 10 AM send. Half got send-time optimization. The STO segment opened at 31%. The fixed-time segment opened at 23%. The client never sent a fixed-time email again.

Industry-specific timing windows

While there’s no universal best time, there are patterns within industries. These aren’t rules. They’re starting points. You test from here.

B2B: Mornings (6 AM – 12 PM), Tuesday-Thursday

B2B audiences behave differently than consumers. They’re checking email before work, during work hours, and sometimes after work. The workday window is the primary engagement period.

I’ve managed B2B accounts across software, professional services, manufacturing, and finance. The pattern is consistent. Peak engagement happens in the early morning, before meetings start, and in the late morning, between meetings. Tuesday through Thursday perform best. Monday is recovery from the weekend. Friday is checkout mode.

The morning window—6 AM to 8 AM local—is often the highest. People check personal email before work. They’re not in meeting mode yet. They have time to read.

The late morning window—10 AM to 11 AM—is second. People have been working for a few hours. They might be taking a break, catching up on email.

Afternoon sends—1 PM to 4 PM—tend to underperform. People are in meetings, heads down on work, or checked out after lunch.

B2C E-commerce: Evenings (6-9 PM) and Weekends

Consumer behavior flips. During work hours, people are busy. They’re not shopping. They’re not browsing. They’re working.

The engagement windows for B2C are outside of work hours. Evenings, after dinner. Weekends, when people have time to browse.

I ran a test for a D2C apparel brand. We sent the same campaign at three different times. Tuesday 10 AM got 14% opens. Thursday 7 PM got 28% opens. Sunday 10 AM got 31% opens. The weekend and evening sends crushed the workday send.

The pattern holds across most consumer categories. People shop when they have time. They have time in the evenings and on weekends. Sending during work hours is fighting for attention against meetings and deadlines. Sending when they’re relaxed is meeting them where they are.

There are exceptions. Newsletters and media brands often perform well in the morning, when people are catching up on the day. Flash sale brands sometimes perform well at odd hours—early morning, late night—when people are scrolling before bed. But for standard e-commerce, evenings and weekends are the primary windows.

Mastering Frequency

The preference center: Giving control back to the user

The single best way to manage frequency is to let your subscribers tell you what they want.

I started building preference centers about ten years ago. At first, they were simple. A checkbox: “Send me weekly updates” or “Send me monthly updates.” The results were immediate. Unsubscribe rates dropped. Engagement increased. People who chose the monthly option opened more of those monthly emails than they had opened the weekly ones.

Options: Daily, Weekly, Monthly, Only Sales

A good preference center offers real choices. Not just “unsubscribe or stay.” Real gradations of frequency.

I use four options for most clients. Daily for the superfans who want everything. Weekly for the engaged majority. Monthly for people who want to stay connected but don’t want frequent emails. Only sales for the deal-seekers who don’t want content.

The data on preference centers is clear. When you give people control, they stay on your list longer. They open more of the emails they do receive. They’re less likely to mark you as spam. They’re more likely to buy.

I had a client who was terrified to add a preference center. They thought people would all choose the lowest frequency. They were wrong. About 40% chose weekly. 30% chose monthly. 20% chose only sales. 10% chose daily. The daily group became their most valuable segment. The only sales group stopped unsubscribing because they were finally getting what they wanted.

The technical implementation is straightforward. Most ESPs have preference center templates. You add a link in your footer. You give people the options. You update your segments to respect those choices. It’s an afternoon of work for a lifetime of better engagement.

How to test frequency (The cadence A/B test)

If you don’t have a preference center, or if you’re trying to find the right baseline frequency, you test.

A frequency test is different from a subject line test. You’re not testing two versions of one send. You’re testing two different sending cadences over time.

The structure I use is simple. Split your active list into two equal segments. Send to Segment A at your current frequency. Send to Segment B at a different frequency. Run the test for four to eight weeks. Measure open rates, click rates, unsubscribe rates, and spam complaints for each segment.
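
The tracking side of this test is simple arithmetic; the judgment is in the interpretation. A minimal sketch, with the metric set from the paragraph above and hypothetical eight-week totals (not a real client's data):

```python
def cadence_metrics(delivered, opens, clicks, unsubs, complaints):
    """Summarize one segment of a frequency test as rates (%)."""
    return {
        "open_rate": round(100 * opens / delivered, 2),
        "click_rate": round(100 * clicks / delivered, 2),
        "unsub_rate": round(100 * unsubs / delivered, 3),
        "complaint_rate": round(100 * complaints / delivered, 3),
    }

# Segment A: current cadence. Segment B: reduced cadence.
segment_a = cadence_metrics(delivered=80_000, opens=14_400,
                            clicks=1_600, unsubs=240, complaints=24)
segment_b = cadence_metrics(delivered=48_000, opens=11_520,
                            clicks=1_440, unsubs=72, complaints=5)
```

Compare the two dictionaries at the end of the window, not send by send, because fatigue is cumulative.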

I ran a frequency test for a media company that was sending daily. We tested daily vs. three times per week. The three-times-per-week segment had higher open rates, lower unsubscribes, and the same overall click volume. They were getting the same engagement with fewer sends. That’s a win.

The key is to run the test long enough for fatigue to show up. A week of testing won’t capture the cumulative effect of frequency. You need at least a month. Two months is better.

Signs you are sending too often (unsub spikes, open rate drops)

The signs of over-sending are clear once you know what to look for. The problem is that most marketers don’t look.

The first sign is a spike in unsubscribes. Not a gradual increase. A spike. If your unsubscribe rate jumps from 0.1% to 0.3% on a send, that’s a signal. If it stays elevated across multiple sends, you have a frequency problem.

The second sign is a sustained drop in open rates. Not a one-send dip. A trend. If your open rate has been dropping for four to six weeks and you haven’t changed anything else, you’re probably sending too much.

The third sign is an increase in spam complaints. This is the most serious. If your spam complaint rate starts climbing, you’ve crossed a line. People aren’t just ignoring you. They’re actively trying to stop you.

I had a client who was sending three times a week. Their metrics were stable. They decided to add a fourth send. The first week, open rates held. The second week, a slight dip. The third week, a noticeable drop. The fourth week, unsubscribes spiked. The fifth week, spam complaints doubled.

They had found their ceiling. Three times a week was fine. Four times a week was too many. The audience told them clearly. The problem was they almost didn’t notice because the signals took weeks to fully manifest.

I check frequency metrics every month. Not every campaign. The trends matter more than the individual sends. If I see unsubscribes trending up, I look at frequency first. Most of the time, that’s the culprit.

Consistency builds habit; frequency builds fatigue

There’s a tension here that every email marketer has to manage.

Consistency builds habit. When people know when to expect your email, they’re more likely to open it. A weekly email that arrives every Tuesday at 9 AM becomes a habit. People look for it. They expect it. They open it.

Frequency builds fatigue. The more you send, the more you ask of your audience’s attention. Attention is finite. Every email you send is a withdrawal from that attention account. If you withdraw too often without depositing enough value, the account runs dry.

The best email programs I’ve seen are consistent without being frequent. They send on a predictable schedule. People know when to expect them. But they don’t send so often that people start to tune out.

The Sunday morning newsletter that arrives at 8 AM every week. The Tuesday afternoon product update that comes every two weeks. The monthly digest that rounds up the best content. These cadences build habit without building fatigue.

I’ve seen brands succeed at daily sending. But they’re the exception. They have superfans who want everything. They have content that people genuinely look forward to. Most brands don’t have that. Most brands are better off sending less often and making each send count.

The rhythm of engagement is personal to your audience. Some audiences want to hear from you daily. Some want weekly. Some want monthly. Some want only when there’s a sale. The professionals find that rhythm. They don’t impose a cadence from a blog post. They listen to the data. They watch the unsubscribes. They test. They adjust.

Timing and frequency are not tactics you set once and forget. They’re variables you optimize continuously. Your audience changes. Their habits change. Their attention shifts. The send time that worked last year might not work this year. The frequency that was perfect six months ago might be too much now.

The professionals are always listening. They’re always testing. They’re always adjusting. And they never, ever assume that a generic study knows their audience better than they do.

Moving Beyond the Open: Using CTOR and Conversion as the True North

Introduction – The Ultimate Goal Isn’t an Open

The vanity vs. value metric argument

I was in a quarterly business review a few years ago. The marketing director was presenting. She had slides on list growth. Slides on open rates. Slides on deliverability. The CEO was nodding along. Then she got to revenue from email. The CEO stopped nodding. “Why is this number flat when all these other numbers are up?”

The marketing director didn’t have an answer. She’d been optimizing for opens. She’d been celebrating higher open rates. But the revenue wasn’t moving. Because opens don’t pay the bills.

This is the vanity versus value trap. Vanity metrics feel good. They go up. They make you look competent in presentations. But they don’t correlate to business outcomes. Value metrics are harder. They’re messier. They’re harder to move. But they’re the ones that actually matter.

I’ve seen marketers spend months optimizing subject lines to lift open rates by five percentage points. The open rate went up. The revenue stayed flat. Because the people who were opening weren’t clicking. They were opening, glancing, and deleting. The subject line got them in the door. The content failed to close the deal.

The shift I’ve made in my own practice is simple. I don’t care about open rates as a success metric. I care about them as a diagnostic. They tell me if my deliverability is working. They tell me if my subject lines are doing their job. But they don’t tell me if my email program is actually generating value.

The value metrics are downstream. Clicks. Conversions. Revenue. Customer lifetime value. These are the numbers that matter. These are the numbers that move the business.

Why opens don’t pay the bills

Let’s be clear about what an open actually is. An open means someone’s email client loaded a tracking pixel. That’s it. It doesn’t mean they read the email. It doesn’t mean they understood the offer. It doesn’t mean they took any action. It means a pixel fired.

Before Apple’s Mail Privacy Protection, an open was at least a reasonable proxy for attention. Not perfect, but reasonable. Now, with MPP, an open can happen without the person ever seeing your email. The metric is broken. But even before MPP, the fundamental problem was the same. Opens don’t generate revenue. Clicks generate revenue.

I had a client who was obsessed with open rates. They’d run subject line tests constantly. They’d celebrate wins of one or two percentage points. Their open rate was 28%. Industry-leading. But their click rate was 1.2%. Their revenue from email was anemic.

We stopped optimizing for opens. We started optimizing for clicks. We changed the content. We changed the offers. We changed the layout. The open rate dropped to 22%. The client panicked. But the click rate went to 3.8%. Revenue from email doubled.

The 28% open rate was a lie. It was high because the subject lines were good at getting people to open, but the content was bad at getting people to act. People opened, saw nothing compelling, and left. The high open rate was masking a broken email program.

The moment you start measuring success by opens, you optimize for opens. You write clickbait subject lines. You create curiosity gaps that don’t deliver. You get the open, then you lose the reader. The business suffers. But your dashboard looks great.

The professionals measure what matters. Opens are directional. Clicks and conversions are value.

Defining the Advanced Metrics

Click-Through Rate (CTR): The action taker

CTR is the percentage of people who received your email and clicked a link. It’s the first value metric in the chain. Someone opened, read, and took action.

How to calculate CTR

The formula is simple. Unique clicks divided by delivered emails, times one hundred.

If you send to 10,000 people and 300 click, your CTR is 3%.
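
The arithmetic is trivial, but encoding it once keeps reporting consistent across campaigns. A minimal helper (Python, purely for illustration) using the numbers from the example above:

```python
def ctr(unique_clicks, delivered):
    """Click-through rate: unique clicks as a percentage of delivered emails."""
    return 100 * unique_clicks / delivered

assert ctr(300, 10_000) == 3.0   # the 10,000-send example above
```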

I look at CTR before I look at open rates. It tells me more about the quality of the email. A high open rate with a low CTR tells me the subject line is doing its job but the content is failing. A low open rate with a high CTR tells me the content is good but the subject line isn’t getting people in the door.

CTR is where most email programs live or die. It’s the bridge between attention and action. If you can’t get clicks, you can’t get conversions. If you can’t get conversions, you can’t get revenue.

I had a client whose CTR was stuck at 0.8%. They were convinced their audience wasn’t interested. We ran a series of tests on content and CTA placement. Six weeks later, CTR was 2.4%. Same audience. Same offers. Different approach. The audience was interested. The emails just weren’t giving them a clear reason to click.

CTR is not a vanity metric. It’s a value metric. It correlates with revenue. Move CTR, move revenue.

Click-to-Open Rate (CTOR): The quality score

CTOR is CTR with the denominator swapped from delivered emails to unique opens. It’s clicks divided by unique opens. It tells you, of the people who opened your email, what percentage clicked.

How to calculate CTOR

Unique clicks divided by unique opens, times one hundred.

If 1,000 people open your email and 100 click, your CTOR is 10%.
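
Worth noting: CTOR is just CTR divided by the open rate, so the three metrics are arithmetically linked. A minimal sketch:

```python
def ctor(unique_clicks, unique_opens):
    """Click-to-open rate: unique clicks as a percentage of unique opens."""
    return 100 * unique_clicks / unique_opens

assert ctor(100, 1_000) == 10.0   # the example above

# CTOR == CTR / open rate: the delivered count cancels out.
delivered, opens, clicks = 10_000, 2_000, 300
ctr_pct = 100 * clicks / delivered    # 3.0
open_pct = 100 * opens / delivered    # 20.0
assert abs(100 * ctr_pct / open_pct - ctor(clicks, opens)) < 1e-9
```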

Why CTOR is the best measure of content relevance

CTOR is the purest measure of content quality. It removes deliverability from the equation. It removes subject lines from the equation. It isolates the content.

When I look at a campaign, I look at CTOR first. It tells me whether the content did its job. If CTOR is high, the content resonated with the people who saw it. If CTOR is low, something is wrong with the content, the offer, or the relevance to the audience.

I’ve seen CTOR range from 5% to 25% across different campaigns and clients. The high end is exceptional. The low end is broken.

A low CTOR can mean several things. The content isn’t relevant to the audience. The offer isn’t compelling. The call to action is unclear. The design is confusing. The email is too long. The email is too short. The CTOR tells you there’s a problem. The specifics of what that problem is require investigation.

I had a client with a CTOR of 4%. People were opening, but almost no one was clicking. We tested five different versions of the email. Subject lines varied, but the CTOR stayed low. The problem was the content. It was all about the company. Features, announcements, internal news. No one cared. We shifted to content about the customer. Problems, solutions, case studies. CTOR jumped to 14% on the next send.

CTOR is the canary in the coal mine. When it drops, your content is failing. When it rises, you’re delivering value.

Conversion Rate & ROI: The bottom line

Clicks are great. But clicks don’t pay the bills either. Conversions pay the bills.

Conversion rate is the percentage of people who completed a desired action. Purchase. Signup. Download. Demo request. Whatever the goal of the email is.

The conversion rate is the ultimate value metric. Everything before it is intermediate. Opens lead to clicks. Clicks lead to conversions. Conversions lead to revenue.

I’ve worked with clients who optimized for clicks. They’d celebrate high CTRs. But the conversion rate from those clicks was low. They were getting people to the site, but not getting them to buy. The problem was the landing page, not the email. But they couldn’t see that because they stopped measuring at the click.

The chain is only as strong as the weakest link. You need to measure all of them. Open rate. CTR. CTOR. Conversion rate. Revenue per email. Return on investment.

Revenue per email is the number I care about most. It’s total revenue from email divided by total emails sent. It tells you, on average, how much money each email generates. That’s the business metric. That’s what the CEO cares about.

ROI is revenue minus cost, divided by cost. If you spend $1,000 on email platform costs and generate $10,000 in revenue, your ROI is 900%. That’s a number that matters.
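
Both bottom-line numbers reduce to one-liners; a sketch using the figures above:

```python
def revenue_per_email(total_revenue, emails_sent):
    """Average revenue generated per email sent."""
    return total_revenue / emails_sent

def roi_percent(revenue, cost):
    """Return on investment: (revenue - cost) / cost, as a percentage."""
    return 100 * (revenue - cost) / cost

assert roi_percent(10_000, 1_000) == 900.0   # the $10k-on-$1k example above
```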

Open rates don’t show up in the P&L. CTR doesn’t show up in the P&L. CTOR doesn’t show up in the P&L. Conversions and revenue show up in the P&L.

How to Optimize for CTOR

Clarity of offer above the fold

The most common reason for low CTOR is that people open the email and don’t immediately understand what they’re supposed to do.

Above the fold matters in email just like it matters on a website. The top of the email, visible without scrolling, needs to communicate three things. What is this email about? Why should I care? What do you want me to do?

If any of those is unclear, people bounce. They opened, they glanced, they left.

I’ve audited hundreds of emails with low CTOR. The pattern is consistent. The subject line promised one thing. The top of the email delivered something else. The reader felt misled. They didn’t click.

The fix is brutal simplicity. The headline above the fold should directly connect to the subject line. The offer should be clear. The call to action should be obvious.

I had a client whose emails had beautiful design. Hero images. Layers of text. Subtle branding. Their CTOR was 6%. We stripped it down. One headline. One sentence of context. One button. CTOR went to 15%. The design wasn’t helping. It was getting in the way.

Above the fold is prime real estate. Use it to make the offer clear. Save the nuance for below the fold.

Single CTA vs. Multiple Links strategy

This is a perennial debate in email marketing. One CTA or many?

The answer depends on the goal of the email.

If the email has a single goal—buy this product, register for this webinar, read this article—use a single CTA. Every link that’s not the primary CTA is a distraction. Every distraction reduces conversion.

If the email is a newsletter or digest with multiple pieces of content, multiple links are appropriate. The goal is engagement, not a single conversion. Readers should be able to click on what interests them.

I’ve run tests on this across dozens of clients. For promotional emails, a single CTA almost always outperforms multiple links. For content emails, multiple links perform better.

The mistake I see most often is promotional emails with multiple CTAs. “Shop now. Learn more. Read our blog. Follow us on Instagram.” The reader doesn’t know what you want them to do. So they do nothing.

If you want them to buy, make the only clickable thing a button that says “Buy Now.” Every other link is noise. Remove it.

Visual hierarchy and button placement

Where the CTA appears matters as much as what it says.

Above the fold is best. If the reader has to scroll to find the button, you’ve lost the ones who opened and glanced.

Button design matters too. Color contrast. Size. White space around it. The button should look like a button. It should be the most prominent element on the screen.

I’ve seen emails where the CTA was a text link at the bottom of a long paragraph. In the same font as the rest of the text. The reader couldn’t find it. CTOR suffered.

The best practice is simple. A button. High contrast color. Above the fold. One per email if the goal is a single action.

I tested button placement for a client. Version A had the button at the top, right after the headline. Version B had the button at the bottom, after three paragraphs of copy. Version A had a CTOR of 18%. Version B had a CTOR of 9%. Same content. Same button. Different placement.

The people who want to click don’t need three paragraphs of persuasion. Give them the button early. Let the copy support the decision for those who need it.


Attribution: Connecting Email to Revenue

Setting up UTM parameters correctly

If you’re not using UTM parameters, you don’t know how much revenue your email is generating. You’re guessing.

UTM parameters are tags you add to the links in your email. They tell Google Analytics where the traffic came from. Source, medium, campaign, content, term.

For email, I use a consistent structure. Source is the ESP name. Medium is email. Campaign is the name of the campaign. Content is the specific link or CTA.

A UTM for an email might look like this:
?utm_source=klaviyo&utm_medium=email&utm_campaign=fall_sale&utm_content=hero_button
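
Hand-typing these strings invites typos, and a stray space or inconsistent capitalization splits your reporting into separate rows. If your ESP doesn't append them automatically, a small helper can build tagged links; this sketch uses Python's standard library and the example values above:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def add_utm(url, source, campaign, content, medium="email"):
    """Append UTM parameters to a link, preserving any existing query string."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
        "utm_content": content,
    })
    return urlunsplit(parts._replace(query=urlencode(query)))

tagged = add_utm("https://example.com/coats", "klaviyo", "fall_sale", "hero_button")
```

The `example.com` URL is a placeholder; the parameter names are the standard ones Google Analytics recognizes.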

Without these tags, traffic from email shows up in Google Analytics as “direct” or “referral.” You can’t tell which email drove it. You can’t measure performance. You can’t optimize.

I’ve taken over accounts where no UTMs were set up. The marketing team had no idea which campaigns were driving revenue. They were optimizing based on opens and clicks. They were flying blind.

Setting up UTMs takes five minutes. Most ESPs have a feature that appends them automatically. If yours doesn’t, you can add them manually. The effort is trivial. The insight is invaluable.

Multi-touch attribution vs. last-click attribution

The simplest way to attribute revenue to email is last-click attribution. The last channel someone clicked before converting gets the credit. It’s easy. It’s what most analytics platforms show by default.

It’s also wrong.

People don’t buy on the first touch. They see an email. They don’t click. They see a retargeting ad. They don’t click. They see a social post. They click. They don’t buy. They see another email. They click. They buy.

Last-click attribution gives all the credit to the last email. It ignores the other touches that built the intent.

Multi-touch attribution distributes credit across all the touches in the customer journey. It’s more accurate. It’s also more complex to set up.

For most businesses, the shift from last-click to multi-touch changes the perceived value of email. Under last-click, email looks good. Under multi-touch, email looks even better, because it often plays an early role in the journey, building intent that converts later.
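
Linear attribution—splitting credit evenly across touches—is the simplest multi-touch model (time-decay and position-based are common alternatives). A sketch contrasting it with last-click, using a hypothetical four-touch journey:

```python
def last_click(touches, revenue):
    """Attribute all revenue to the final touchpoint before conversion."""
    return {touches[-1]: revenue}

def linear_multi_touch(touches, revenue):
    """Split revenue credit evenly across every touchpoint in the journey."""
    share = revenue / len(touches)
    credit = {}
    for channel in touches:
        credit[channel] = credit.get(channel, 0) + share
    return credit

# Hypothetical journey: email starts it, search gets the last click.
journey = ["email", "retargeting_ad", "email", "search"]
assert last_click(journey, 100) == {"search": 100}
assert linear_multi_touch(journey, 100) == {
    "email": 50.0, "retargeting_ad": 25.0, "search": 25.0,
}
```

Under last-click, email gets zero credit for this journey; under linear, it gets half. That gap is the story most email programs are missing.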

I had a client who thought their email program was underperforming. Last-click attribution showed email driving 12% of revenue. We implemented multi-touch attribution. Email’s contribution jumped to 28%. The email program was doing more work than they realized. People were opening emails, not clicking, then searching for the brand later and buying. The email didn’t get the last click, but it started the journey.

Attribution is complex. But ignoring it is worse. You can’t optimize what you don’t measure. And if you’re measuring incorrectly, you’re optimizing incorrectly.

Use opens to measure deliverability, use conversions to measure success

This is the framework I’ve settled on after years of managing email programs.

Opens are for deliverability. If your open rate drops, something is wrong with your deliverability or your subject lines. That’s a technical problem. It needs investigation. But it’s not a measure of business success.

CTR and CTOR are for engagement. They tell you if your content is resonating with the people who see it. They’re the quality metrics.

Conversions and revenue are for business success. They’re the numbers that matter to the bottom line. They’re the numbers that get you promoted. They’re the numbers that justify the budget.

I’ve seen too many marketers get stuck at opens. They celebrate open rate wins. They ignore flat revenue. They’re optimizing for the wrong thing.

The shift in mindset is simple. Ask yourself: does this metric correlate with revenue? If yes, optimize for it. If no, use it as a diagnostic but don’t let it drive your strategy.

Open rates correlate weakly with revenue. CTR correlates more strongly. Conversion correlates directly. ROI is revenue.

The professionals measure all of them. They use opens to monitor deliverability. They use CTR and CTOR to optimize content. They use conversions and ROI to measure success. And they never, ever mistake a high open rate for a healthy email program.

I’ve seen email programs with 40% open rates and flat revenue. I’ve seen email programs with 15% open rates and growing revenue. The open rate didn’t tell the story. The revenue told the story.

Stop celebrating opens. Start celebrating revenue. The business will thank you. Your career will thank you. And you’ll stop wasting time optimizing metrics that don’t matter.

Future-Proofing Your Email Strategy: AI, Interactivity, and The Cookieless Future

Introduction – The Next Generation of the Inbox

How the open rate will continue to evolve

I’ve been doing email long enough to watch the open rate go from the unquestioned king of metrics to a broken, distrusted shadow of its former self. Apple’s Mail Privacy Protection was the earthquake. But the aftershocks are still coming.

Google has been testing similar privacy features in Gmail. Not the same as MPP—yet—but the direction is clear. Email providers are moving toward hiding engagement data from senders. They see it as a privacy feature. They’re right. And they’re going to keep pushing.

I tell my clients now that the open rate as we knew it is on borrowed time. Within the next few years, I expect most major email providers to implement some form of privacy protection that makes open tracking unreliable or impossible. The metric won’t disappear overnight. But it will become so noisy that using it for decision-making will be foolish.

The professionals are already adapting. They’re shifting their reporting away from opens. They’re building systems that don’t rely on open data for segmentation. They’re training stakeholders to look at clicks, conversions, and revenue instead.

If your email program still depends on opens for segmentation—”if not opened in 30 days, move to re-engagement”—you’re building on sand. That logic will break. The future is behavioral segmentation based on clicks, purchases, and other explicit actions. Things the user actually does, not things a pixel does for them.

The shift toward first-party data

The broader trend in digital marketing is the collapse of third-party data. Cookies are dying. Tracking is becoming harder. Privacy regulations are tightening. And the one channel that remains largely unaffected is email.

Email is first-party data. Someone gave you their address. They told you they wanted to hear from you. That’s a relationship, not a tracked impression. In a world where third-party data is becoming unusable, email becomes more valuable, not less.

I’ve watched this shift play out over the last five years. Brands that invested in email ten years ago are now sitting on assets their competitors can’t replicate. A list of a hundred thousand engaged subscribers is worth more than a million-dollar ad budget. Because the ad budget buys impressions that disappear. The email list buys attention that you own.

The shift toward first-party data means email is no longer just a channel. It’s the central hub of your marketing strategy. Everything else—ads, social, content—should drive to email. Because email is the one place where you control the relationship. The platforms can change their algorithms. They can kill organic reach. They can raise ad prices. They can’t touch your list.

The brands that understand this are building their entire marketing infrastructure around email. They’re using ads to capture email addresses. They’re using content to nurture subscribers. They’re using email to drive repeat purchases. The funnel starts and ends with the list.

Artificial Intelligence in Email Marketing

AI-generated subject lines (Prompt engineering for better results)

The AI tools that emerged in the last few years have changed how I write subject lines. Not because AI is better than a human writer. Because AI is better at generating volume and spotting patterns.

I use AI for subject line ideation now. I’ll give it a prompt: “Generate 20 subject lines for an email promoting a 20% off sale on winter coats. Target audience is outdoor enthusiasts in cold climates.” The AI gives me twenty options in ten seconds. Some are terrible. Some are decent. A few are genuinely good.

The value isn’t that AI writes the subject line for me. The value is that I see patterns I might not have considered. The AI suggests angles I wouldn’t have thought of. I take those, refine them, and test them against my own ideas.

The key is prompt engineering. The quality of what you get from AI depends entirely on the quality of what you give it. Vague prompts get vague results. Specific prompts with context, audience details, and brand voice guidelines get usable results.
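The difference between a vague prompt and a useful one is mostly structure: offer, audience, voice, and constraints packed into a single request. The sketch below shows one way to assemble such a prompt in Python; the field names and template are illustrative, not any vendor's API.

```python
# Sketch: assembling a structured subject-line prompt for an LLM.
# The template and constraints here are illustrative examples,
# not a specific tool's required format.

def build_subject_line_prompt(
    offer: str,
    audience: str,
    brand_voice: str,
    count: int = 20,
) -> str:
    """Combine offer, audience, and voice guidelines into one prompt.

    A bare "write subject lines" prompt yields generic output; packing
    in concrete context is the core of the prompt-engineering advice.
    """
    return (
        f"Generate {count} email subject lines.\n"
        f"Offer: {offer}\n"
        f"Target audience: {audience}\n"
        f"Brand voice: {brand_voice}\n"
        "Constraints: under 50 characters, no spam-trigger words, "
        "no ALL CAPS, at most one emoji."
    )

prompt = build_subject_line_prompt(
    offer="20% off winter coats",
    audience="outdoor enthusiasts in cold climates",
    brand_voice="warm, practical, lightly playful",
)
print(prompt)
```

Whatever model you feed this to, the same principle holds: every line of context you add narrows the output toward something you can actually use.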

I’ve run tests comparing purely human subject lines, purely AI subject lines, and hybrid subject lines. The hybrids win consistently. AI provides the volume and pattern recognition. The human provides the brand voice and strategic context. Together, they outperform either alone.

The mistake I see is treating AI as a replacement. Marketers who let AI write their subject lines without oversight end up with generic, forgettable copy. The AI doesn’t know your brand voice. It doesn’t know what’s worked historically. It doesn’t know the nuance of your audience. You have to guide it.

Predictive segmentation (AI deciding who gets what)

This is where AI is genuinely transformative. Not writing copy. Deciding who gets which copy.

Predictive segmentation uses machine learning to analyze customer behavior and predict future actions. It identifies patterns humans can’t see. It builds segments based on propensity to buy, likelihood to churn, preferred content types.

I worked with a client where we implemented predictive segmentation for their email program. The AI analyzed past purchase behavior, browsing activity, and engagement patterns. It built micro-segments based on predicted interests. Customers who were predicted to be interested in hiking gear got hiking emails. Customers predicted to be interested in camping got camping emails.

The results were significant. Open rates increased by 18%. Click rates increased by 34%. Revenue per email increased by 27%. The AI was better at predicting what people wanted than the marketing team was, because the AI was analyzing thousands of data points across hundreds of thousands of customers.

The downside is complexity. Predictive segmentation requires data infrastructure most small businesses don’t have. You need a customer data platform or an ESP with advanced machine learning capabilities. You need clean data. You need volume. For large senders, it’s worth it. For smaller senders, the ROI isn’t there yet.

The human oversight factor: AI as a tool, not a replacement

The most important thing I’ve learned about AI in email is that it’s a tool. A powerful tool. But still a tool.

AI can generate subject lines. It can predict segments. It can optimize send times. It cannot understand your brand. It cannot know your customers the way you do. It cannot make strategic decisions about positioning, messaging, or long-term relationship building.

The human oversight factor is non-negotiable. Every AI-generated subject line needs a human review. Every predictive segment needs a human sanity check. Every AI-optimized send time needs a human to ask “does this make sense for our audience?”

I’ve seen brands implement AI tools and assume they could set them and forget them. They stopped reviewing. They stopped testing. The results degraded over time. The AI was optimizing for short-term engagement metrics while missing the long-term brand building that human strategists prioritize.

The future of email is human plus AI. Not human replaced by AI. The humans who thrive will be the ones who learn to use AI as a force multiplier. They’ll generate more ideas, test more variations, analyze more data. But they’ll still make the final calls. They’ll still own the strategy. They’ll still be accountable for the results.

Interactive Email (AMP for Email)

What is AMP?

AMP for Email is Google’s project to make emails interactive. It’s been around for a few years, but adoption is still limited. That’s starting to change.

AMP stands for Accelerated Mobile Pages. Applied to email, it lets you embed interactive elements directly in the message. The user doesn’t need to click through to a website. The action happens in the inbox.

Forms inside the inbox

With AMP, you can put a form directly in the email. Someone can update their preferences, RSVP to an event, fill out a survey, or even make a purchase without leaving their email client.

I saw a travel brand implement AMP for a flight booking confirmation. The email included a form to select seats. Customers could pick their seats directly in the email. No click-through. No loading screen. The conversion rate on seat selection increased by 40% because the friction was removed.

Carousels and live content

AMP also enables carousels—image galleries that users can scroll through without leaving the email. Live content that updates in real time. Product recommendations that refresh based on current inventory.

A retailer I worked with tested AMP carousels in their promotional emails. Instead of a single hero image, they embedded a carousel of six products. Engagement increased. Click rates went up. People spent more time interacting with the email.

The limitation is support. AMP for Email only works in certain email clients. Gmail supports it. Yahoo Mail and Mail.ru support it. Outlook and Apple Mail do not. That means you’re building for a subset of your audience. For now, it’s progressive enhancement: the interactive features work in supported clients, and everyone else gets a fallback.
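The fallback mechanism is worth understanding, because it is what makes AMP safe to ship to a mixed audience. An AMP email is a `multipart/alternative` message carrying a plain-text part, a `text/x-amp-html` part, and an HTML part; clients that understand the AMP part render it, and everyone else falls back to the HTML or plain-text version. The sketch below builds that structure with Python's standard `email` library; the addresses and markup are placeholders.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Sketch of the AMP fallback structure: one multipart/alternative
# message with three bodies. Addresses and markup are placeholders.

msg = MIMEMultipart("alternative")
msg["Subject"] = "Pick your seat"
msg["From"] = "bookings@example.com"   # placeholder sender
msg["To"] = "traveler@example.com"     # placeholder recipient

# Plain-text fallback first.
msg.attach(MIMEText("Choose your seat: https://example.com/seats", "plain"))

# Google's sender guidelines place the AMP part BEFORE the HTML part,
# so clients that don't recognize text/x-amp-html fall back to HTML.
amp_body = "<!-- AMP4EMAIL markup with an amp-form would go here -->"
msg.attach(MIMEText(amp_body, "x-amp-html"))

# Standard HTML fallback last.
msg.attach(MIMEText(
    "<p><a href='https://example.com/seats'>Choose your seat</a></p>",
    "html",
))

content_types = [part.get_content_type() for part in msg.get_payload()]
print(content_types)
```

Because the HTML part is always present, a subscriber on Apple Mail sees a normal email with a link, while a Gmail user gets the interactive form, which is exactly what progressive enhancement means here.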

How interactivity changes the game

From “open to read” to “open to engage”

This is the fundamental shift. Traditional email is a document. You open it, you read it, you click a link, you leave. Interactive email is an application. You open it, you engage with it, you take action inside it.

The implications for metrics are significant. When someone interacts with an AMP carousel, that’s engagement that doesn’t show up as a click. It shows up as an interaction. New metrics. New ways to measure attention.

I expect interactivity to become standard over the next five years. The technology is there. The support is growing. The user behavior is shifting. People are used to interactive experiences everywhere else. Email is the last holdout.

The brands that adopt early will have an advantage. They’ll capture attention in ways competitors can’t. They’ll reduce friction in the conversion path. They’ll build more engaging experiences.

The barrier is development. AMP for Email requires more technical work than standard HTML emails. You need developers who understand the spec. You need to test across clients. It’s not plug-and-play. For brands with the resources, it’s worth it.

The Cookieless Future and Email’s Rise

The death of third-party cookies

Third-party cookies are going away. Google has been delaying the phase-out, but the direction is clear. The days of tracking users across the web with third-party data are ending.

For marketers who built their strategies on retargeting ads, this is a crisis. For marketers who built on email, this is an opportunity.

Third-party cookies allowed you to follow people around the web. They allowed you to build audiences based on browsing behavior across sites. That’s going away. The platforms are closing the loop.

First-party data—the data you collect directly from your customers—becomes more valuable when third-party data disappears. And email is the primary vehicle for first-party data collection and activation.

I’ve been telling clients for years to treat their email list as their most important marketing asset. That advice is more urgent now. The brands that own their audience relationships will win. The brands that rent audience attention from platforms will struggle.

Why email is now the most valuable marketing asset

Let me be clear about what I mean by valuable.

An email list is an asset you own. You control it. You control the relationship. You control the messaging. No algorithm can deprioritize you. No platform can raise your rates. No policy change can take it away.

Compare that to social media. You don’t own your followers. You’re renting them from Facebook, Instagram, TikTok. The platform can change the algorithm and your reach disappears. They can ban your account. They can raise ad prices. You have no control.

Compare that to search. You don’t own your rankings. Google can update their algorithm and your traffic disappears. You’re at their mercy.

Email is different. Your list is yours. The only thing between you and your subscribers is spam filters. And spam filters are governed by your behavior, not by a platform’s business model.

I’ve seen brands that built million-follower Instagram accounts lose everything when the algorithm changed. I’ve never seen a brand lose their email list to an algorithm change.

The value of email is only going to increase as other channels become less reliable. The brands that are investing in list growth, engagement, and first-party data now will be the ones that thrive in the cookieless future.

Preparing for stricter privacy regulations (GDPR, CCPA, etc.)

The privacy trend isn’t just technological. It’s regulatory. GDPR in Europe. CCPA in California. More states are passing similar laws. More countries will follow.

These regulations all have one thing in common. They make it harder to collect and use data without explicit consent. They require transparency. They give users the right to be forgotten.

For email marketers who are doing things right, these regulations aren’t a problem. They require explicit opt-in. They require clear privacy policies. They require honoring unsubscribe requests promptly. That’s all good practice anyway.

For marketers who are buying lists, using shady acquisition tactics, or ignoring unsubscribes, these regulations are a threat. They create liability. They create fines. They create reputational damage.

The future is consent-based marketing. People choose to hear from you. They can choose to stop. You respect that. The relationship is transparent.

The brands that embrace this will build trust. The brands that resist will lose subscribers, face fines, and damage their reputation. The choice is clear.

Conclusion – The Constant is Change

Recap of the 10 pillars to sustained email health

I’ve walked through ten things that matter in email marketing. The myth of the universal open rate. Apple’s MPP. Subject line psychology. The preheader. Deliverability. Segmentation. List hygiene. Timing and frequency. Advanced metrics. And the future.

If I had to distill it all down to what actually works, it’s this. Build a list of people who want to hear from you. Send them things they actually care about. Respect their attention. Measure what matters. Keep learning.

The tactics will change. The platforms will change. The metrics will change. The fundamentals won’t. Relevance, respect, value. Those are the constants.

Final thoughts: Focus on value, and the metrics will follow

I’ve been doing this long enough to see the cycles. A new channel emerges. Everyone rushes to it. They declare email dead. Then they realize the new channel doesn’t deliver the same ROI. They come back to email. The cycle repeats.

Email isn’t going anywhere. It’s the oldest digital marketing channel for a reason. It works. It’s reliable. It’s owned.

The future of email is brighter than it’s ever been. AI will make it smarter. Interactivity will make it more engaging. The death of third-party cookies will make it more valuable. The professionals who adapt will thrive.

The ones who get stuck in old ways—obsessing over open rates, sending generic blasts, ignoring deliverability—will fall behind. The gap between the best email programs and the average ones will widen.

My advice is simple. Focus on value. Value to your subscribers. Value to your business. The metrics that matter will follow. The rest is noise.

I’ve written thousands of emails. I’ve managed lists from a thousand subscribers to a million. I’ve seen what works and what doesn’t. The principles are the same regardless of scale. Know your audience. Respect their attention. Give them something worth opening. Give them something worth clicking. Give them something worth buying.

Do that consistently. The rest is details. Important details, sure. But details nonetheless. The foundation is value. Build on that, and everything else falls into place.