Jun 27, 2022

Intro to Incrementality Series: Part III

Recap

In Part 1 of our Intro to Incrementality series, we went through the basics of incrementality analysis. We talked about our two groups, test (or exposed) and control (or holdout), and how this type of analysis measures the lift caused by a specific variable of one over the other, thereby attaining a true measure of that variable’s impact. In Part 2, we homed in on that control (or holdout) group and explained exactly how to go about creating that group through ghost bidding in a way that avoids skewing the incrementality analysis. So, we’ve covered the basics and we’ve made sure that by the time we get to our incrementality results, we can trust that there’s no skew or bias.

In this last installment, we will discuss the final, and perhaps most important, part of this process: putting this analysis to work in real time. To get to the heart of it, let’s look back on our initial basketball example:

An NBA team has two players who both make 40% of their foul shots. The team wants to improve that percentage, so it decides to hire a shooting coach. It designs a test to evaluate that coach and assigns him to only one of the two players. Both players are told to do everything else they had been doing, exactly as they had been doing it: spend the same amount of time in the gym, keep a similar diet, and maintain the same weight. The following season, the player who worked with the shooting coach makes 80% of his free throws, while the player not assigned the shooting coach makes 50%.

While this example has been useful in demonstrating the basics of incrementality, the test (the player assigned the coach), the control (the player not assigned the coach), and the new variable (the coach), it neglects some crucial real-world implications.

For starters, if something appears to be working, it doesn't necessarily benefit the team to wait a whole season to evaluate exactly how well it's working. Had the coach been assigned to the control player midway through the season, the results might have differed, but it's quite likely the team could have boosted two players' free throw percentages instead of one. It's also worth taking a look at exactly what the coach is doing. Is it just the extra repetitions demanded by the coach that are causing the improvement? Is it an adjustment in form? Is it some mental component or confidence boost? Finally, and most importantly, this coach represents an investment, and the team is paying for it. The team has to determine whether that investment is justified by the incremental improvement. If it is, it then has to determine whether it should increase that investment, and how. Our example simplifies a problem that, in basketball or marketing, is quite messy. The point of this final installment is to discuss cleaning up that mess.

Putting It All Together

1. In marketing, incrementality analysis should be ongoing and in real time. Marketers don’t have an unlimited budget or the luxury of conducting tests in a vacuum. It’s important to note that what incrementality results look like after two weeks might be different than what they look like after two months, but that doesn’t invalidate the two-week results. It’s a continual process of data collection and analysis that should inform decision making. What decision making? We’ll get there shortly.

2. Incrementality analysis is often conducted at the media-type level. In our earlier marketing example, we discussed determining the incremental impact of adding a CTV campaign to a larger marketing mix. In reality, that CTV campaign very likely consisted of several streaming services (maybe Sling, Hulu, and Pluto TV), several creatives (maybe a :30-second creative and two :15-second creatives), and more than one audience (maybe an intent-based audience and a demographic-based audience). When we conduct this type of analysis, it's important to get more granular than the overall media type to unearth additional valuable insights; the sketch after this list shows one way to do that.

3. This will come as no surprise to anyone, but paid media costs money. We cannot, and should not, treat this analysis as independent of cost.
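To make points 2 and 3 concrete, here is a minimal sketch, in Python, of granular, cost-aware incrementality math. Everything in it is hypothetical: the data, field names, and spend figures don't come from any real campaign, and the incrementality formula shown is one common formulation of lift between a test (exposed) group and a ghost-bid control (holdout) group, per Parts 1 and 2 of this series.

```python
# A minimal sketch (hypothetical data and field names, not from a real
# campaign) of granular, cost-aware incrementality math. Each slice is a
# publisher/creative/audience combination with its own test (exposed) and
# control (ghost-bid holdout) groups.

from dataclasses import dataclass

@dataclass
class Slice:
    publisher: str
    creative: str
    audience: str
    spend: float            # media cost attributed to this slice ($)
    exposed_users: int      # users who saw the ad
    exposed_convs: int      # checkouts among exposed users
    holdout_users: int      # ghost-bid holdout users
    holdout_convs: int      # checkouts among holdout users

def incrementality_pct(s: Slice) -> float:
    """Share of exposed conversions that would NOT have happened anyway.
    One common formulation: lift of the exposed conversion rate over the
    holdout conversion rate, relative to the exposed rate."""
    cr_exposed = s.exposed_convs / s.exposed_users
    cr_holdout = s.holdout_convs / s.holdout_users
    return max(0.0, (cr_exposed - cr_holdout) / cr_exposed)

def cost_per_incremental_conv(s: Slice) -> float:
    incremental = s.exposed_convs * incrementality_pct(s)
    return s.spend / incremental if incremental else float("inf")

# Hypothetical slice: Hulu, :30 creative, intent-based audience.
hulu_30_intent = Slice("Hulu", ":30", "intent", spend=400.0,
                       exposed_users=50_000, exposed_convs=100,
                       holdout_users=5_000, holdout_convs=2)

print(f"{incrementality_pct(hulu_30_intent):.0%}")          # 80%
print(f"${cost_per_incremental_conv(hulu_30_intent):.2f}")  # $5.00
```

The point of the structure is that each publisher/creative/audience slice carries its own test group, holdout group, and cost, so incrementality and efficiency can be read at whatever level of granularity the campaign supports.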

CTV Test Campaign Example

How do we put this analysis to work in real time, granularly, and factoring in cost? We apply it to campaign optimization. Here’s another marketing example:

A brand decides to add a $1k CTV test to their marketing mix that previously consisted only of search and social media campaigns. The brand’s goal is to optimize toward the lowest cost-per-checkout (CPC) possible for its CTV campaigns. The brand has only one creative and is testing only one intent-based audience, but it doesn’t want to put all its eggs in one basket, so it decides to test three publishers, Sling, Hulu, and Pluto TV.

Most performance CTV vendors don't report incremental conversions, so the brand starts with what it can see: raw checkouts and cost-per-checkout for each publisher in the CTV campaign, with Sling posting the lowest CPC of the three.

Any brand that sees those results would think: "Well, it looks like Sling is the best; we should put more budget there and less in Hulu and Pluto TV." And, in a vacuum, the brand would be absolutely correct. But media doesn't work in silos; it works across them. The brand is also running search and social, plus it's got all this organic demand it worked so hard to build up.

The brand, knowing this, decides to add incrementality analysis as an additional data point, and it finds that Sling's incrementality percentage is 10%, Hulu's is 80%, and Pluto TV's is 50%. In other words, 90% of conversions recorded from Sling would have happened even without those Sling exposures (of every 100 checkouts Sling reports, only about 10 were actually caused by the ads), 20% of conversions recorded from Hulu would have happened even without those Hulu exposures, and 50% of conversions recorded from Pluto TV would have happened even without those Pluto TV exposures.

This is worrisome and tricky when it comes to future budget allocation. What the brand is seeing is commonplace in the marketing world: conversions reported by platforms are duplicative, because each platform measures only the media it runs. So the brand's CTV vendor takes credit for its social media conversions, the brand's search vendor takes credit for its CTV conversions, and so on.

But the brand has a secret weapon: incrementality-informed optimization. Instead of relying on CPA metrics alone, the brand can, very simply, apply the incrementality analysis to the cost-per-checkout, producing a new metric: cost-per-incremental-checkout (iCPC). Multiplying the number of checkouts by the incrementality percentage gives the incremental checkouts; dividing spend by that figure gives each publisher's iCPC.
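Here's what that math looks like in practice, with one big caveat: only the incrementality percentages (10%, 80%, and 50%) come from the example above; the checkout counts and the per-publisher split of the $1k budget are hypothetical placeholders.

```python
# Hypothetical checkout counts and spend split; only the incrementality
# percentages (0.10, 0.80, 0.50) come from the example in the text.
campaign = {
    # publisher: (spend in $, recorded checkouts, incrementality %)
    "Sling":    (400.0, 40, 0.10),
    "Hulu":     (300.0, 20, 0.80),
    "Pluto TV": (300.0, 25, 0.50),
}

for publisher, (spend, checkouts, inc_pct) in campaign.items():
    cpc = spend / checkouts            # raw cost-per-checkout
    incremental = checkouts * inc_pct  # checkouts the ads actually caused
    icpc = spend / incremental         # cost-per-incremental-checkout
    print(f"{publisher:8}  CPC ${cpc:6.2f}  iCPC ${icpc:6.2f}")
```

Even with placeholder counts, the shape of the result mirrors the brand's situation: Sling wins on raw CPC ($10.00 vs. $15.00 for Hulu), but once only incremental checkouts are counted, Hulu's iCPC ($18.75) comes in far below Sling's ($100.00).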

The takeaway? Without incrementality analysis applied to its performance numbers, the brand would have been optimizing its CTV campaign in a way that ran counter to its bottom line, toward conversions that would have happened anyway. With it, the brand can see it would be best served spending more on Hulu: the top performer from a pure CPC standpoint (Sling) finishes well behind the top performer from an iCPC standpoint (Hulu).

In Conclusion

As brands get smarter with their budget allocation across and within media types, incrementality analysis becomes a crucial stepping stone on the path to profitability and cross-channel ROAS.

Flip, our performance CTV platform, not only offers incrementality analysis but also gives brands the option to leverage incrementality-informed optimization strategies, ensuring their CTV dollars are put to work efficiently within a cross-channel media mix. To learn more, speak to a member of our team today.