Why Open Rate Isn’t a Good Key Performance Indicator for Subject Line Tests
From:
Jeanne S. Jennings -- Author - The Email Marketing Kit
For Immediate Release:
Dateline: Washington, DC
Friday, July 7, 2023


“We’re testing a subject line, so we’ll use open rate as our KPI, since the subject line impacts the open rate. Whichever version gets the highest open rate will also likely have the highest conversion rate or revenue-per-email, right?”

As an email marketing consultant and trainer, I hear this a lot.

I get it. Integrating Google Analytics, or another platform that measures your conversions and revenue, with your email marketing platform and/or dashboard takes time, budget, and resources. It’s costly.

But the cost of not doing it is probably greater.

Case in point: A subject line test I did with a client a while back, as part of a new B2C service launch campaign.

The open rate results appear in the table below, along with the lift/loss for the test versions, using the control as a baseline.

Based on this data, which version would you say performed best and won the test?

[Table: open rate and lift/loss vs. the control for each version]

If this is all you have to work with, you’re going to declare test version A the winner, since it had the highest open rate. Test version A generated a 1.5% increase in open rate over the control.

Now, the sample size was large enough that this is a statistically significant result. But let’s say that it wasn’t statistically significant. In that case you’d go with the control as your winner.

Open rate is a diagnostic metric; it gives you valuable information about how people engaged with your email, but it doesn’t directly measure bottom-line results. Business metrics speak to bottom-line results; a business metric measures the action(s) driven by the email that your business needs to survive.

In this instance, the client is looking for email recipients to sign up for this new service, which is free, so their business metric is CR from Sent, which stands for conversion rate (here, the number of sign-ups for the new service) from the quantity sent. Since there’s no financial component to the transaction, we don’t have a use for revenue per email sent (RPE), another common business metric.
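Here’s a minimal sketch of those two business metrics as formulas in code; the function names and the example figures are illustrative, not from the client’s data.

    # Minimal sketch of the two business metrics discussed above.
    # Function names and example figures are illustrative only.

    def cr_from_sent(conversions: int, emails_sent: int) -> float:
        """CR from Sent: conversions divided by the quantity of emails sent."""
        return conversions / emails_sent

    def rpe(revenue: float, emails_sent: int) -> float:
        """Revenue per email sent (RPE): total revenue divided by quantity sent."""
        return revenue / emails_sent

    # e.g., 2,000 sign-ups from 100,000 emails sent -> 2.0% CR from Sent
    print(f"{cr_from_sent(2_000, 100_000):.1%}")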

Take another look at the table, now with the conversion rate metric added. Is test version A still the winner?

Version    CR from Sent    Lift vs. Control
Control    1.6%            (baseline)
Test A     1.8%            +12.5%
Test B     2.0%            +25.0%

The answer is no. Test version B is the winner.

Why is Test Version B the winner?

Because it generated a 25% lift in conversion rate from sent over the control. 2.0% of those who received the test B email converted; that is, they signed up for the new service. Only 1.6% of those who received the control email did.

And yes, this result is statistically significant.
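If you want to check significance yourself, a standard two-proportion z-test is one common approach. In the sketch below, the send sizes of 50,000 per version are hypothetical; only the 1.6% and 2.0% conversion rates come from this test.

    # Rough sketch of a two-proportion z-test for comparing conversion
    # rates (or open rates). Send sizes here are hypothetical; only the
    # 1.6% vs. 2.0% rates come from the test described above.
    from math import erf, sqrt

    def two_proportion_z_test(conv_a, sent_a, conv_b, sent_b):
        p_a, p_b = conv_a / sent_a, conv_b / sent_b
        p_pool = (conv_a + conv_b) / (sent_a + sent_b)         # pooled rate
        se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
        return z, p_value

    # Control: 800 of 50,000 (1.6%); Test B: 1,000 of 50,000 (2.0%)
    z, p = two_proportion_z_test(800, 50_000, 1_000, 50_000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # small p -> significant difference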

Test version A wasn’t a slouch; it bested the control version in conversion rate by 12.5%. But that’s a smaller lift over the control than test version B provided.

Why did we use conversion rate to determine the winner? Because it’s our key performance indicator (KPI). As with most companies that offer free services, we are looking to bring in as many new users as possible from each email we send. So, we divide the total number of conversions we received from each version by the number of email addresses that version was sent to, which is conversion rate from sent, and compare them.
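As a concrete sketch of that comparison, again assuming hypothetical send counts of 50,000 per version (the resulting rates match the figures above):

    # Sketch of the winner selection described above: CR from Sent for
    # each version, then lift over the control. Send counts are
    # hypothetical; the resulting rates match the figures in this article.
    sends = {"control": 50_000, "test_a": 50_000, "test_b": 50_000}
    conversions = {"control": 800, "test_a": 900, "test_b": 1_000}

    cr = {v: conversions[v] / sends[v] for v in sends}  # 1.6% / 1.8% / 2.0%
    lift = {v: (cr[v] - cr["control"]) / cr["control"]
            for v in sends if v != "control"}           # vs. the control

    winner = max(lift, key=lift.get)
    print(lift)  # roughly {'test_a': 0.125, 'test_b': 0.25}
    print(f"winner: {winner} with a {lift[winner]:.1%} lift")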

“But how often is that true? Maybe that case study was an anomaly. I bet that the subject line with the highest open rate usually also has the highest RPE or CR from Sent.”

I hear you, but…

This case study was, indeed, based on a single send.

But looking at subject line tests that I’ve done over the years with a variety of clients, here’s what I found:

If we had used open rate as our KPI, we would have been right just 20% of the time. 10% of the time we would have been wrong, declaring the subject line with the highest open rate the winner when, in reality, it generated fewer conversions or less revenue than the lower-open-rate version.

And 70% of the time we would have declared there to be no difference between the subject lines, even though there was actually a statistically significant variance in the RPE or CR from Sent.

This happens more than most people imagine; open rate tests often don’t show statistically significant variances, even though the differences in the business metrics, RPE or CR from Sent, are, in fact, statistically significant. Substantially so. This has happened with just about every client I’ve worked with during my 20+ years of consulting.

One more note. When you optimize for a bottom-line metric, whether it’s RPE or CR from Sent, the benefit isn’t just for the test you’re currently doing.

The winner of a test should become the new control, so the next time you’re doing a similar campaign that’s where you start.

In the case of the example above, we sent the winning creative (test version B) a few more times during the launch campaign. By using CR from Sent as our KPI, instead of open rate, we got the most sign-ups per email sent out of each of those campaigns as well.

Make sure you always use the best KPI for your email campaigns. This should always be a business, not a diagnostic, metric.

Find this interesting? Check out my article on why click-through rate is not a good KPI.

News Media Interview Contact
Name: Jeanne S. Jennings
Title: Author, The Email Marketing Kit
Dateline: Washington, DC United States
Direct Phone: 202-333-3245
Cell Phone: 202-365-0423