Case Study: How to Increase Email Clicks by 70%+

We’ll start with a somewhat controversial statement when it comes to email.

Your click rate is vastly more important than your open rate.

We’re talking like 10x more important.

The reason this might be met with a raised eyebrow by many? Most authors use open rate as their primary method for evaluating whether an email was successful or not. This isn’t limited to authors; open rate tends to be the focal point for most business owners.

Yet open rate doesn’t matter as much as everyone believes. For one, privacy changes like the ones introduced in iOS 15 have inflated open rates by 5 – 10% since Fall 2021, making the metric a much less accurate reflection of actual opens than it used to be.

But open rate was always overrated, even before these changes. Yes, you need people to open your emails. If your open rate is catastrophically low (sub-20%), then you’d want to examine why that’s the case and take measures to address it.

However, getting subscribers to click on your emails is far more critical. Because how do you send them to Amazon (or another retailer) to get them to buy your books? You get them to click on a link in the email. If they’re not clicking, then they’re probably not buying.

So if people aren’t clicking on your links, then even a world-beating open rate won’t make your newsletter a useful asset.

(Yes, I realize that a handful of people will search for the book on their retailer of choice rather than clicking the link in the email, but these are exceptions. And the takeaway from this case study will help improve the rate of people doing that, too.)

Thus, in this 4,500+ word case study, I break down:

  • The data that told me something was off with the click rate in a recent email
  • My 4-step process for analyzing data
  • How you can potentially increase your click rates by 70%+ with one simple adjustment
  • Why data compounds in value over time

Let’s get to it, then, starting with how I knew something was off.

Note: this case study analyzes email data from my non-fiction newsletter, but the takeaways and analysis process are 100% applicable to fiction.

THE BACKSTORY: SOMETHING IS AMISS…

In January 2023, I sent out the 2023 edition of the Book Marketing Crash Course to my non-fiction email newsletter subscribers. This was actually the 6th version of the crash course, which meant I’d sent 6 emails over the past 4 years announcing a new version’s publication.

And the 2023 version generated a 10% click rate.

Now, having sent out this particular crash course multiple times before, something immediately stood out to me: that click rate was lower than past years. I knew that before I even checked the numbers, since I analyze my data periodically.

But when I did check the actual numbers, they revealed an even starker difference than expected: just one year before (2022), the crash course announcement email had gotten a 17% click rate.

A note: the click rates in this data (and all data that follows in this case study) include clicks on all the links in the email.

So what caused the massive drop from 17% to 10% in just one year?

I figured it out. I’ll talk about that at the end. But more important than the actual answer is how I identified what was wrong. Because that will allow you to generate insights from your own data.

So let’s break down the analysis process I used.

GETTING ANSWERS: THE 4-STEP ANALYSIS PROCESS

Before we begin, it should be noted that, in the real world, these analysis steps blur together. It’s not a linear process. What I’ve outlined here is a general 4-step framework that’s proven useful for me:

  1. I look at the initial data set and assess whether the results are around what I’d expect.
  2. If the data differs from my expectations, I form an initial theory about why the performance is significantly better or worse than expected.
  3. I look for qualitative information and additional data for context to determine whether the theory makes sense or not.
  4. The final step (which can be repeated many times before actually being final) is that I refine my theory based on this additional data / information OR I go back to Step 2 and form a new theory. If the latter, then I repeat Steps 2, 3, and 4 until I find the answer or decide I can’t draw any reasonable conclusions (i.e., the answer is currently unknowable).

The biggest stumbling block is that most people do the first two steps…then stop. This happens because they’re trying to prove their initial theory right, so they perform only a cursory investigation.

This is not effective for finding useful patterns and information hidden within the data.

We’re not looking to prove our initial theory right. We’re looking for the actual truth.

And that often requires digging pretty deep.

Let’s walk through this step-by-step to figure out what went awry with the 2023 Crash Course email, then.

STEP 1: ASSESSING THE RESULTS

The first thing is just to look at the initial data set and assess whether the results are around what I’d expect.

If the metrics look fine, I rarely investigate further. You can’t do a deep dive analysis into every piece of data, nor would you want to. If something falls in the middle range of meh / decent / good, then it’s probably not worth spending additional time on.

If something is a particularly good or unexpectedly bad performer, however, it’s valuable to reverse-engineer why that’s the case.

And if it doesn’t really matter that the performance is good / bad, then I probably won’t investigate further, either. Because optimizing things that don’t matter is not a big leverage point that will move the business forward. (Although doing so can still yield valuable insights that might prove useful elsewhere.)

As for how you form realistic expectations in the first place? The primary way is simply going through this process over and over. Through data analysis, you’ll generate benchmarks and build an intuitive understanding of where certain metrics should fall. Trusted sources of data are also invaluable for reference, but a word of caution here: most data you find (whether related to books or at large) is incorrect. It’s exceedingly rare to come across clean, well-analyzed data. My general rule is that I take everything with a grain of salt unless I see the actual raw data (e.g., the sales dashboard, the ad account, etc.) with my own two eyes.

What happened: the 2023 email got 53% opens and 10% clicks. The open rate aligns with recent averages (45 – 50%). But the click rate is much lower (10% v. 13 – 19%) than expected based on past data.
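
If you want to formalize this first-pass check, it’s simple enough to script. Here’s a minimal sketch in Python; the expected ranges are illustrative, pulled from the numbers above (plus an assumed tolerance band on the open rate), not universal benchmarks:

    # Flag a metric only when it falls outside the range you'd expect
    # based on past sends. Ranges here are illustrative assumptions.
    EXPECTED = {
        "open_rate": (0.45, 0.55),   # recent average is 45-50%
        "click_rate": (0.13, 0.19),  # past Crash Course emails
    }

    def flag_unexpected(metrics: dict) -> list:
        flags = []
        for name, value in metrics.items():
            low, high = EXPECTED[name]
            if not low <= value <= high:
                flags.append(f"{name} = {value:.0%} (expected {low:.0%} to {high:.0%})")
        return flags

    # The 2023 email: the open rate passes, the click rate gets flagged.
    print(flag_unexpected({"open_rate": 0.53, "click_rate": 0.10}))
    # -> ['click_rate = 10% (expected 13% to 19%)']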

So I decided to investigate further. Which is where the analysis begins.

INTERLUDE: ACTION EXERCISE

Before I dive into what I did, I want to briefly interrupt our case study with an action exercise that will help you actually apply this process. Because while reading and acquiring information has some benefit, the true dividends come through practice.

So why not see if you can figure out why the click rate dropped from 17% in 2022 to 10% in 2023 before you read any further? Naturally, you’re at a substantial disadvantage here, since I have access to all my data over the years and know the content of each email far better (since I wrote them).

But the data provided below should be enough to come up with potential solutions to the riddle. And the challenge will sharpen your analysis skills far more than an easy exercise would.

So here’s the data set for the six times the Marketing Crash Course was announced to my email list, along with some additional data from two times when I announced a guide to getting a BookBub deal:

These 8 emails all followed the same general format: an email to my non-fiction list with a link to the respective guide.

Additional qualitative information:

  1. The Crash Course had been released as an email course and a permafree book on Amazon prior to the 2019 email.
  2. The 2019 Marketing Crash Course’s impending release had been mentioned in the email directly before “Here’s the 2019 Book Marketing Crash Course,” so people were potentially looking forward to its release.
  3. 3 out of 6 emails announcing the Crash Course also sold a product (they’re listed above).
  4. The first BookBub Deals guide email above was the first time that piece of content had ever been sent out (i.e., that guide was brand new).

So, if you want, set a timer for 15 minutes and walk through the process outlined above (form a theory explaining why the 2023 Crash Course email’s click rate dropped, examine the additional data, and then refine your theory).

And then keep reading to see if your conclusions match mine.

Timer hit zero? Cool. Let’s keep going.

STEP 2: FORM AN INITIAL THEORY

If the data differs from my expectations, the next thing I do is form an initial theory about why the performance is significantly better or worse than expected.

This theory often comes in the form of a question. Framing things in this manner prevents me from becoming too attached to a certain explanation. It is very easy to spend the entirety of your analysis time trying to prove yourself right.

That’s not what you’re doing here, though. You’re looking for the truth. Even if that truth is painful and forces you to recalibrate major aspects of your email list management, ad strategy, or whatever it is you’re currently analyzing.

Often I’ll come up with multiple theories. This also prevents me from becoming too convinced that one of them is right before I get the chance to examine the data more closely.

What happened: my initial theories here were that the low click rate in 2023 was caused by either (1) content fatigue (having released the Crash Course multiple times previously, a bunch of people on the list would have seen it already) or (2) too many links in the 2023 email splitting readers’ focus.

STEP 3: LOOK AT MORE DATA

After I form a theory (or theories), I look for qualitative information and additional data for context to determine whether the theory makes sense or not. I start with things that are directly / closely related and then expand to other situations that are similar but not exactly the same for additional context. This allows me to spot patterns that are universal rather than isolated to a very specific situation.

I’m not just looking at data here. I’m also looking for qualitative information (e.g., the actual content of the email, or the ad, or whatever else I’m analyzing). If analysis and decision making were just about numbers, then computers would have already automated everything and SkyNet would have rendered us all obsolete.

The data is just one of our tools in the quest for answers. The other is reading the emails, ads, etc., and looking for things that might be impacting the numbers.

What happened: the first thing I usually do is look for obvious mistakes or things that were unclear.

Essentially, did I execute correctly according to my current known best practices?

People often think they have a massive problem when they simply forgot to include the link in their ad or email.

So I generally start my analysis by double-checking the basics:

  • Checking whether I actually linked something (yes)
  • Checking to make sure I emailed the right list (yes)
  • Making sure the data is relatively stable by waiting a few days for additional clicks to come in (yes)
  • Skimming or re-reading the email that generated the lower / higher than expected results to see if there was anything obviously different or confusing (no)

This saves you time. You want to avoid diving down a four-hour analysis rabbit hole only to discover that your Facebook Ad sucked because you chose the wrong ad objective. Most of the problems I see when data looks bad are simply because the author in question (myself included) made a basic mistake in the foundational setup.

That’s not a big deal. It happens. You correct it and move on.

Sometimes things aren’t explained by an obvious error, though, and demand further inquiry.

So next, I’d start looking at the most relevant data. This is where having said data easily accessible is incredibly valuable. If all the data is buried or siloed in different dashboards, then it’s hard to unlock any useful insights from it.

Here, I could easily search “Marketing Crash Course” directly in ConvertKit’s dashboard to find all the related announcement emails. This allowed me to quickly pull up the relevant data. One of the emails was in MailerLite, so I then entered all the data in a spreadsheet so I could easily see it on a single screen:

Note: this is the same data that I’ve already shown a couple times, just re-posted here for convenience so you don’t have to scroll back up.
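
If you’d rather build this single-screen view in code than in a spreadsheet, here’s a minimal pandas sketch. The CSV file and column names are hypothetical; substitute whatever your email platform actually exports:

    import pandas as pd

    # Hypothetical consolidated export: one row per announcement email,
    # combined from the ConvertKit and MailerLite data into a single CSV.
    emails = pd.read_csv("crash_course_emails.csv")  # assumed file / columns

    # Normalize the raw counts so every email is compared on the same basis.
    emails["open_rate"] = emails["unique_opens"] / emails["recipients"]
    emails["click_rate"] = emails["unique_clicks"] / emails["recipients"]

    # Sort chronologically so year-over-year changes are easy to scan.
    columns = ["send_date", "subject", "recipients", "open_rate", "click_rate"]
    print(emails.sort_values("send_date")[columns].to_string(index=False))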

I looked at this past data first because it’s a 1:1 comparison to the 2023 data: these are all emails announcing the Book Marketing Crash Course. That makes them the most relevant to the situation at hand.

From this data, we can immediately see that both my initial theories (content fatigue and too many links) were off base. The previous multi-link emails generated click rates between 13% and 18%. And the 2022 edition of the Crash Course had a 17% click rate, despite being released just 4 months after the 2021 edition to an essentially same-size list (3,320 v. 3,220 subscribers). And those were the fourth and fifth times the Crash Course had been released.

While possible, it seems unlikely that the sixth time around would suddenly experience massive fatigue and cause the click rate to plummet from 17% to 10%. Especially when I added 550+ subscribers between the 2022 and 2023 sends (an increase of 16%).

It’s important to highlight how the various pieces of data provide a more accurate picture. Not just from having 6 different data points (8, if you include the BookBub guide data), but also from the additional columns beyond subject line, open rate, and click rate. Something as simple as the date column provides invaluable additional context and paints a more complete story of what the numbers actually mean.

STEP 4: REFINE THE THEORY or FORM A NEW ONE

The final step (which can be repeated many times before actually reaching the finish line) is that I refine my theory based on this additional data / information OR I go back to Step 2 and form a new theory. If the latter, then I repeat Steps 2, 3, and 4 until I find the answer or decide I can’t draw any reasonable conclusions (i.e., the answer is currently unknowable).

What happened: In our case here, I had to repeat the process since my initial theories didn’t make sense in context with the data. So that entailed:

  • Step 2, redux [new theory]: the low click rate might be caused by a subject line problem
  • Step 3, redux [look at more data]: I looked at the data for the Crash Course emails again, filtering them through the lens of my new theory. This often provides different insights, despite the actual data set remaining the same.

Looking at the data through the lens of the subject line theory, a pattern emerges: the subject lines that focus solely on the crash course and don’t mention anything else have much higher click rates (18%, 13%, 19%, 17%). The 13% number throws a wrench in this pattern, though. But upon further inspection of the context, that lower number was because I had sent out the previous version of the Crash Course just a month before. So in that case, the click rate dip from 18% (March 2019) to 13% (April 2019) was content fatigue: people had just read the original version of the crash course and weren’t necessarily interested in reading an updated version a month later.

So I tossed out that 13% number as an outlier. Which meant the remaining singular-focus subject lines all generated 17%+ click rates. But three data points isn’t exactly a huge amount of data to draw conclusions from.

Which meant that I needed to keep going deeper. I read or skimmed the old emails announcing the Crash Course to see if there was a difference in style, tone, or anything else that would account for a massive drop in clicks. This is an example of qualitative data, which can help explain why the numbers are better / worse. I also quantified one aspect of the emails’ “style” by analyzing word count to see if longer or shorter emails performed better (there wasn’t a clear pattern).
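
That word count check is trivial to script, too. A sketch, again with hypothetical column names, assuming the same export as above plus a body column holding each email’s text:

    import pandas as pd

    # Same hypothetical export as before, with a `body` column added.
    emails = pd.read_csv("crash_course_emails.csv")
    emails["click_rate"] = emails["unique_clicks"] / emails["recipients"]
    emails["word_count"] = emails["body"].str.split().str.len()

    # A correlation near zero means email length doesn't explain the
    # differences in click rate.
    print(emails[["word_count", "click_rate"]].corr())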

So I didn’t get any additional answers by looking at the directly related data. Having exhausted the analysis possibilities with these numbers, I looked for data from similar situations that weren’t 1:1 fits. One example of this was a guide to getting more BookBub deals that I’d sent out twice:

These emails followed the same format as the Crash Course emails (an email to my non-fiction list announcing the guide with a link to read it).

The BookBub guide subject lines both had a singular focus on the guide without trying to shoehorn in any additional information. We can see that the click rate cleared 20% both times it was sent (23% the first time, 21% the second).

This provides further evidence that the culprit behind the drop in Crash Course click rate from 17% (2022) to 10% (2023) was simply including multiple different topics in the subject line.

And, further disproving my original “too many links” theory, the BookBub guide email with the most links (3) outperformed the email with just 1 link. Note that this doesn’t suggest that more links improve click rates, merely that link count wasn’t the cause of the declining click rates I saw for the 2023 Book Marketing Crash Course.

As for why the BookBub guide emails enjoyed higher click rates compared to the Crash Course? Well, two reasons are likely:

(1) the BookBub guide had only been sent out twice. So it was new to the entire list, whereas there are at least a handful of people each year who see the Crash Course and decide not to read it since they’ve checked out a previous version. This phenomenon shows up in the data above as well, where the click rate drops the second time the guide is sent (23% to 21%).

(2) more people are interested in that topic. Some books, ads, topics, etc. are just going to resonate more than others.

With the analysis done, let’s close things out with four takeaways.

TAKEAWAY #1: FOCUS YOUR SUBJECT LINE on ONE TOPIC (THAT ALIGNS WITH YOUR LINK)

This is simple, but as the data demonstrates, having a subject line with a singular focus that aligns with the link has a huge impact on click rates:

  • The 2022 Book Marketing Crash Course: 53% opens, 17% clicks (link: crash course)
  • 2023 Book Marketing Crash Course (last chance to save $200 on strategy course): 53% opens, 10% clicks (links: crash course, strategy course)

So the idea here is simple: the subject line topic and main link should align with one another. And even if you have multiple things to talk about in your newsletter, focus on one core thing in the subject line.

Because simply omitting 8 additional words from the subject line in 2022 produced a click rate that was over 70% higher (and since the 2022 rate is rounded down and the 2023 rate is rounded up, 70% is the floor, not the ceiling).
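
For anyone who wants to check that math, here it is, treating the reported 17% as a floor and the reported 10% as a ceiling:

    # Relative lift between the 2022 and 2023 click rates as reported.
    # Since 2022 was rounded down and 2023 was rounded up, the true
    # lift is at least this large.
    rate_2022 = 0.17
    rate_2023 = 0.10
    lift = (rate_2022 - rate_2023) / rate_2023
    print(f"Relative lift: {lift:.0%}")  # -> Relative lift: 70%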

This one best practice could be worth thousands of dollars over the course of your author career. That’s not an exaggeration; if you get 70% more clicks on each new release email, that has a massive compounding effect over a five or ten year period.

Since I just mentioned compounding, let’s talk about that a bit more.

TAKEAWAY #2: DATA COMPOUNDS OVER TIME

Data is one of your three most valuable assets, the other two being your backlist (#1) and your newsletter (#2). As we’ve seen from this case study, data can reveal insights that, in turn, make those other two assets far more valuable.

Just as writing more books unlocks new marketing opportunities and growing your newsletter increases its marketing utility, your data’s usefulness compounds as you gather more of it.

Here, if I had just one or two data points, it’s likely I would have drawn the wrong conclusion. Only by releasing content over four years and analyzing the ensuing results was I able to uncover the (likely) reason behind the year-over-year drop in click rate.

Having more data allows you to spot patterns more easily and filter out noise. But another benefit of having more data is simply that it helps fine-tune your expectations.

Most people’s expectations are incorrectly calibrated, which leads to a lot of marketing mistakes. By analyzing data, you can bring your expectations in line with reality. More data also gives you more context for identifying patterns, particularly across situations that are similar (but not exactly the same). This is how you identify principles and strategies that work across multiple scenarios, rather than tactics that only apply to very limited circumstances.

But your data is only as useful as its availability. In this case, it was easily accessible within ConvertKit (and MailerLite). But what if I had used very different subject lines over the years, where I couldn’t search for the relevant emails easily? Or if we’re talking about a platform where the data is no longer accessible directly from the dashboard past a certain date (like Amazon Ads)?

So you need a tracking system that gives you easy access to your key numbers. Otherwise, you can’t dive into them further when the need arises. And you’re stuck analyzing a limited data set.

And, as we saw, having multiple data points, and being able to easily reference them, was crucial to generating our conclusions. Because if I had only looked at the 2023 email, it would have told me a certain story about the click rates and opens. Without additional context, I might have thought that a 10% click rate was good. Or maybe I would have been expecting a 30% click rate, and thought the performance was way worse than it actually was.

So the five other data points for the previous Crash Courses filled in a lot of gaps. And the BookBub guide data was the missing piece that allowed me to reach a reasonably definitive conclusion. (Without it, you wouldn’t be reading this case study, because there would be nothing to report.)

TAKEAWAY #3: SKILL COMPOUNDS OVER TIME

I occasionally look at past ads and think man, if I had a time machine and knew then what I know now…

Because a lot of those ads were much better than I thought at the time (and a handful were worse). Had my expectations been calibrated correctly, I could have gotten a ton more mileage out of the best ones. And killed the less effective ones.

But as the saying goes, you don’t know what you don’t know. The only way to get better is to actually do the thing. Analyze the data as best you can with what you know at the time.

And then rinse and repeat.

This applies to the theories you generate as well. The more we learn, the more questions and theories we can come up with. That leads us to vastly different insights from the same data set.

Four years ago, it’s unlikely I would have asked whether the subject line was impacting the click rate. I used to view them as separate, mostly unrelated entities. Now I realize that the subject line pre-frames the entire email, setting the tone and expectations for the content within. Thus, it has a huge influence on what action someone takes after they open.

This is obvious now.

It wasn’t obvious then.

That’s just part of the game. It’s not something to lament; in fact, if you look back at what you were doing four years ago and wouldn’t change a single thing, or feel you were 100% right, then you probably haven’t been building many skills in the interim. That doesn’t mean everything you’re doing should change; just that you can now identify certain aspects that were wrong. Or, at the very least, could have been refined and executed better.

TAKEAWAY #4: DOING THE RIGHT THINGS WELL ALLOWS YOU to DO FEWER THINGS

One of the most important quotes that I’ve ever come across is from Red Blooded Risk. On page 296, the author Aaron Brown writes: “Having a few pieces of validated data is far better than having warehouses full of fiction.”

The reason most people don’t generate good analysis is simple: they’re focused on two things.

(1) Being right.

(2) Finishing their analysis as quickly as possible (while proving their original idea right).

Ironically, this ensures that they accumulate massive warehouses of fictional bullshit that is not only wrong, but costs them 100x, if not 1000x, more time on the back end (not to mention the financial aspect).

Fortunately, these are also correctable. A big part of that is just slowing down. Which is easier after realizing that it saves time in the long run.

One of the core principles underpinning my marketing philosophy is the 80/20 rule: 20% of your actions drive 80% of results. The actual numbers can vary, but the principle is that a few key inputs generate most of the outputs. People often misinterpret this, thinking that it glorifies laziness or is all about looking for shortcuts or hacks.

But really, the 80/20 is about leverage. Because if you focus on getting good at the right things, and doing them well, then you vastly improve your results with the same (or less) effort.

As an example, this case study took me 5+ hours. That’s conservative; between all the analysis and everything else, breaking down (and then writing about) why the results for a single email were lower than expected probably took me 10+ hours.

On the surface, this looks like a bad deal.

Here’s the thing, though: by identifying this best practice (to focus my subject lines on a single topic), I have now saved myself a massive amount of wasted time on future content. Because if my subject line is wrong, it doesn’t matter how good the content is.

I can kill it right out of the gate with a bad subject line.

On the other hand, a tweak that takes me zero additional time can massively amplify my results.

Because if 70%+ more people are reading each piece of content that I write, that means I get a lot more mileage out of each one. Better engagement (because more people are actually checking out and reading the content). More people sharing and spreading the content via word of mouth.

That reduces the number of pieces of content I have to write. Or, alternatively, I could do the same amount of work, but massively increase the return on time and effort I’m getting from each piece.

Either way, I come out ahead.

And this has a huge compounding effect over months and years. Even if we’re conservative and believe the true impact is only 30%+ (since the next lowest click rate was 13%, and 17% is roughly 30% higher than 13%), the impact is still massive. Because this best practice doesn’t apply to announcing just one piece of content. It applies to everything I’ll send out in the future, whether that’s in 2023 or 2043 (unless I find something more effective between now and then). Fiction or non-fiction.

In that context, spending 10 hours to figure out why the subject line bombed the click rate is a steal.

Such is the true value of data and the analysis process detailed within this case study: it allows you to reduce your workload (or get more out of your existing work hours) and find those critical leverage points in your author business. And over a year, three years, ten years, a 70% increase in even just one area can compound into something that transforms your career.

That’s it.

Go sell some books (and keep your subject lines focused on one thing).
