As the direct-to-consumer (DTC) space becomes increasingly competitive, marketers are under constant pressure to understand campaign performance and prove the value of their efforts. Now accountable for driving outcomes through their campaigns rather than building general awareness among consumers, many brands are rethinking their media mix—exploring new channels that can effectively deliver on their campaign goals and maximize their budget.

Digital Remedy partnered with Dynata, the world’s largest first-party data company, to field a marketing-focused survey with the goal of gaining a deeper understanding of DTC marketers’ top priorities, current campaign practices, and media partner preferences.

Download our report for valuable insights into DTC marketers’ top priorities, current campaign practices, and media partner preferences.

For the latest industry trends and insights, check out our blog and sign up to receive our monthly newsletter.

Whether you’re a DTC marketer just getting started in the CTV/OTT space or looking to take your current campaigns to the next level, Digital Remedy is here to help. Discover how your brand can garner greater brand impact, generate more outcomes, and turn TV into a powerful performance channel. For additional information, visit www.digitalremedy.com/ott-ctv or schedule a demo to see our award-winning platform in action.

Get the Report

The Balancing Act: Awareness vs. Performance Marketing

Performance marketing has grown in popularity over the last decade, as marketing budgets have been slashed to maximize return on investment. While brand awareness is important, many marketers are focused on driving (and measuring) bottom-funnel actions, such as website visits, in-store visits, and purchases. Improvements in measurement for media once considered upper-funnel are coming fast and furious. These improvements show that lower-funnel media can have branding impacts, and upper-funnel media can have performance impacts.

What is Performance TV?

In short, more measurable real-world results and more granular reporting for marketers. Performance TV allows marketers to deliver ads to target audiences, measure campaign performance, and attribute bottom-funnel results. Two main benefits of performance TV are the ability to:

  1. Deterministically (that is, definitively) track conversions from your campaign
  2. Optimize those campaigns away from what doesn’t work toward what does—to drive better performance

Performance TV advertising is done through connected TV (CTV) devices that help to attribute and report on those campaigns. Performance CTV provides a unique opportunity for marketers to reach highly-engaged audiences.

The Rise of CTV

CTV offers the high-impact, brand storytelling power of traditional TV plus the targeting, analytics, and interactivity of digital to provide a compelling environment for audiences to engage with messaging alongside premium content.

“Linear TV and CTV are converging; however, similar to the shifting holiday season, which is promoting earlier shopping each year, that doesn’t mean it has made what to buy, where to buy, and whether or not you have the best deal clear for media buyers (and consumers), which is the case for advanced TV.”

– Matt Sotebeer, Chief Strategy Officer, Digital Remedy

For brands looking for new ways to maximize their marketing efforts, CTV is the perfect channel.

“We’re seeing brands start to gear up for Black Friday and Cyber Monday and look toward new channels to leverage. CTV is definitely top-of-mind for these advertisers as long as their investment can be backed up by performance. This makes attribution and optimization on this channel more important than ever.”

– Ben Brenner, VP of Business Development & Strategy, Digital Remedy

CTV’s ability to merge the often-separated performance and brand marketing worlds—including its inherently addressable nature—is redefining the digital ad space and giving marketers a way to take their campaign measurement to the next level.

How Digital Remedy Can Help

Finding the right performance CTV partner can make all the difference in optimizing your media strategy and maximizing ROAS in this fast-growing, highly-profitable market. While many ad tech vendors offer different solutions, not all of them have the full scope of resources to make the most of advertising on this medium. Digital Remedy offers first- and third-party data integrations, direct access to premium OTT publishers, real-time optimization, and granular, transparent bottom-funnel reporting through Flip, our performance OTT stack.

Digital Remedy provides comprehensive campaign performance reporting and data-driven capabilities to help advertisers and agencies connect with target audiences at the best time. With Flip, brands gain access to a sophisticated reporting dashboard to monitor their campaigns and real-time performance insights to optimize toward the KPIs that matter most. Flip provides a new standard in tracking, transparency, and results.

With these valuable insights, marketers can make smarter optimizations and investment decisions—growing their business and driving measurable campaign performance by leveraging the biggest screen in the home to deliver brand messaging.

Download the full report for all the insights, including the myths surrounding CTV. Interested in learning more? Watch our Digital Dish episode or speak to a member of our team.

Recap

In Part 1 of our Intro to Incrementality series, we went through the basics of incrementality analysis. We talked about our two groups, test (or exposed) and control (or holdout), and how this type of analysis measures the lift of one group over the other caused by a specific variable, thereby attaining a true measure of that variable’s impact. In Part 2, we homed in on that control (or holdout) group and explained exactly how to create it through ghost bidding in a way that avoids skewing the incrementality analysis. So, we’ve covered the basics, and we’ve made sure that by the time we get to our incrementality results, we can trust that there’s no skew or bias.

In this last installment, we will discuss the final, and perhaps most important, part of this process: putting this analysis to work in real time. To get to the heart of it, let’s look back on our initial basketball example:

An NBA basketball team has two players who both make 40% of their foul shots. The team wants to improve that percentage, so it decides to hire a shooting coach. It designs a test to evaluate that coach, and assigns him to only one of the two players. Both players are told to do everything else they had been doing, exactly as they had been doing it. For instance, they’re told to spend the same amount of time in the gym, keep a similar diet, and maintain their same weight. After a year, the player who worked with the shooting coach makes 80% of his free throws the following season, while the player not assigned the shooting coach makes 50% of his free throws.

While this example has been useful in demonstrating the basics of incrementality, the test (the player assigned the coach), the control (the player not assigned the coach), and the new variable (the coach), it neglects some crucial real-world implications.

For starters, if something appears to be working, it doesn’t necessarily benefit the team to wait a whole season to evaluate exactly how well it’s working. If the coach had also been assigned to the control player midway through the season, the test results might differ, but it’s quite likely the team could have boosted two players’ free throw percentages instead of one. It’s also worth taking a look at exactly what the coach is doing. Is it just the extra repetitions demanded by the coach that are causing the improvement? Is it an adjustment in form? Is it some mental component or confidence boost? Finally, and most importantly, this coach represents an investment. And the team is paying. The team has to determine whether that investment is justified by the incremental improvement. If it is, it then has to determine whether to increase that investment, and how. Our example simplifies a problem that, in basketball or marketing, is quite messy. The point of this final installment is to discuss cleaning up that mess.

Putting It All Together

1. In marketing, incrementality analysis should be ongoing and in real time. Marketers don’t have an unlimited budget or the luxury of conducting tests in a vacuum. It’s important to note that what incrementality results look like after two weeks might be different than what they look like after two months, but that doesn’t invalidate the two-week results. It’s a continual process of data collection and analysis that should inform decision making. What decision making? We’ll get there shortly.

2. Incrementality analysis is often conducted at the media type level. In our earlier marketing example, we discussed determining the incremental impact of adding a CTV campaign to a larger marketing mix. The reality is that the CTV campaign very likely consisted of several streaming services (maybe Sling, Hulu, and Pluto), several creatives (maybe a :30-second creative and two :15-second creatives), and more than one audience (maybe an intent-based audience and a demographic-based audience). When we conduct this type of analysis, it’s important to get more granular than the overall media type to unearth additional valuable insights.

3. This will come as no surprise to anyone, but paid media costs money. We cannot, and should not, treat this analysis as independent of cost.

CTV Test Campaign Example

How do we put this analysis to work in real time, granularly, and factoring in cost? We apply it to campaign optimization. Here’s another marketing example:

A brand decides to add a $1k CTV test to a marketing mix that previously consisted only of search and social media campaigns. The brand’s goal is to optimize toward the lowest cost-per-checkout (CPC) possible for its CTV campaigns. The brand has only one creative and is testing only one intent-based audience, but it doesn’t want to put all its eggs in one basket, so it decides to test three publishers: Sling, Hulu, and Pluto TV.

Most performance CTV vendors don’t report incremental conversions, so the brand observes the raw checkouts and cost-per-checkout for each publisher in the CTV campaign.

Any brand that sees those results would think: “Well, it looks like Sling is the best; we should put more budget there and less in Hulu and Pluto TV”—and, in a vacuum, the brand would be absolutely correct. But media doesn’t work in silos; it works across them. The brand is also running search and social, plus it has all the organic demand it worked so hard to build up.

The brand, knowing this, decides to add incrementality analysis as an additional data point, and it finds that Sling’s incrementality percentage is 10%, Hulu’s is 80%, and Pluto TV’s is 50%. In other words, 90% of the conversions recorded from Sling would have happened even without those Sling exposures, 20% of the conversions recorded from Hulu would have happened even without those Hulu exposures, and 50% of the conversions recorded from Pluto TV would have happened even without those Pluto exposures.

This is worrisome and tricky when it comes to future budget allocation. What the brand is seeing is a commonplace occurrence in the marketing world: conversions reported by platforms are duplicative, because each platform sees only the media it runs. So the brand’s CTV vendor takes credit for its social media conversions, the brand’s search vendor takes credit for its CTV conversions, and so on.

But the brand has a secret weapon: incrementality-informed optimization. Instead of relying only on cost-per-checkout, the brand can, very simply, apply the incrementality analysis to that metric, and the result is a new one: cost-per-incremental-checkout (iCPC). Multiplying each publisher’s checkouts by its incrementality percentage yields incremental checkouts; dividing spend by that figure yields iCPC.
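For readers who want to see the arithmetic, here is a minimal sketch in Python. The per-publisher spend and checkout counts are hypothetical, chosen only to be directionally consistent with this example; only the incrementality percentages come from the figures above.

```python
# Hypothetical per-publisher results for the $1k CTV test, split roughly evenly.
# Spend and checkout counts are illustrative; incrementality values are from the example above.
campaign = {
    "Sling":    {"spend": 333.0, "checkouts": 20, "incrementality": 0.10},
    "Hulu":     {"spend": 333.0, "checkouts": 10, "incrementality": 0.80},
    "Pluto TV": {"spend": 333.0, "checkouts": 8,  "incrementality": 0.50},
}

for publisher, stats in campaign.items():
    cpc = stats["spend"] / stats["checkouts"]                  # cost-per-checkout (CPC)
    incremental_checkouts = stats["checkouts"] * stats["incrementality"]
    icpc = stats["spend"] / incremental_checkouts              # cost-per-incremental-checkout (iCPC)
    print(f"{publisher}: CPC ${cpc:.2f} | incremental checkouts {incremental_checkouts:.1f} | iCPC ${icpc:.2f}")
```

On these illustrative numbers, Sling posts the lowest raw CPC (about $17) but the highest iCPC (about $167), while Hulu flips from the middle of the pack to the clear winner (an iCPC of about $42) once incrementality is applied.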

The takeaway? Without incrementality analysis applied to the brand’s performance numbers, it would have been optimizing its CTV campaign in a way that was actually counter to its bottom line, toward conversions that would have happened anyway. By adding this analysis, the brand can see it would be best served spending more on Hulu, and that the top performer from a CPC-only standpoint (Sling) finishes well behind the top performer from an iCPC standpoint (Hulu).

In Conclusion

As brands get smarter with their budget allocation across and within media types, incrementality analysis becomes a crucial stepping stone on the path to profitability and cross-channel ROAS.

Flip, our performance CTV platform, not only offers incrementality analysis but also gives brands the option to leverage incrementality-informed optimization strategies, ensuring that their CTV dollars are put to work efficiently within a cross-channel media mix. To learn more, speak to a member of our team today.

In our first Intro to Incrementality piece, we explained what incrementality analysis is and how marketers can use it to make more informed decisions about what success means for their media campaigns. We mentioned how, at the very heart of this analysis, there is a comparison between two groups: a test (or exposed) group and a control (or holdout) group.

In this part of our incrementality series, we talk specifically about that second group, diving deeper into how to create a control group and what it should be composed of for unskewed analysis.

Real-World Example

Remember our basketball example? Here’s a quick refresher:

An NBA basketball team has two players who both make 40% of their foul shots. The team wants to improve that percentage, so it decides to hire a shooting coach. It designs a test to evaluate that coach and assigns him to only one of the two players. Both players are told to do everything else they had been doing, exactly as they had been doing it. For instance, they’re told to spend the same amount of time in the gym, keep a similar diet, and maintain the same weight. 

Player A: 40% of free throws ┃ Player B: 40% of free throws

After a year, the player who worked with the shooting coach makes 80% of his free throws the following season, while the player not assigned the shooting coach makes 50% of his free throws.

Player A: 50% of free throws ┃ Player B: 80% of free throws

There are some very important concepts embedded in this example, but two are chief among them. First, both players start out making 40% of their shots; the observed behaviors of both groups are similar, if not the same. Second, everything else going on, except the shooting coach, remains the same: no extra after-practice shots allowed, no extra weight room, etc. Allowing additional variables into the equation creates noise that can skew or obscure incrementality analysis.

Controlling The Control

Bringing this back to marketing, we’re left with this challenge: how can we create a control group whose observed or expected behaviors match those of the exposed group, and how can we limit the noise around this analysis? Prior to 2020, there were three common methods for creating control (or holdout) groups for marketing and advertising campaign analysis:

1. Market A/B Testing 

Pick two similar markets, turn media (or a certain type of media) on in one market, leave it off in the other, and observe behavior in each. The issue with market A/B testing is that no two markets behave exactly alike, no two markets contain populations with the exact same demographics and interests, and the background noise can overwhelm the signal.

2. Random Suppressed Groups

Using existing technology and cookie, IP, or Mobile Ad ID (MAID) lists, agencies and advertisers were able to actively hold out randomly generated subsets of people from seeing ads. However, as soon as targeting is applied to the campaign, the analysis skews heavily toward the exposed group. For example, imagine you’re selling men’s running shoes and you want to run incrementality analysis. Your campaign is targeting men, but your randomly generated control includes men and women equally. Your exposed group is going to perform better because half the control group can’t even buy your product.

3. PSA Ads

The exposed group, with all its relevant demographic and behavioral targeting, is served ads for the brand. Meanwhile, that same demographic and behavioral targeting is applied to a different group, which, instead of ads for the brand, is served public service announcements or ads for completely unrelated products or activities. The two groups remain separate and distinct but essentially mirror each other in behavior (e.g., back to the basketball example: they both shoot 40% at the line). This gives us a much less skewed, much less noisy read on the incrementality test.

The main drawback of PSA holdouts is that someone has to pay for those PSA ads to be served. Impressions don’t come free, and in order to run this analysis, impressions must still be purchased to serve the PSA ads—so a substantial portion of the advertising budget goes toward actively NOT advertising! Additionally, when a PSA is served, it prevents competitive brands from winning that same impression and possibly swooping in on a customer, which might be good for business but is very bad for statistics—and creates a skew toward the exposed group.

A More Sophisticated Approach

In 2020, a new method for creating a holdout emerged: ghost bidding, which allows media buyers to create a holdout by setting a cadence against individual line items or ad groups. The cadence dictates how often the buying platform will send an eligible bid into the control group instead of actually bidding on the impression. So, at a ghost bid cadence of 20%, roughly 1 in every 5 eligible users receives a “ghost bid” and is placed in a holdout group where they cannot receive ads for the duration of the campaign.
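As a rough sketch of how that cadence might be applied at decision time (the function, names, and in-memory storage below are hypothetical illustrations, not Digital Remedy’s actual implementation):

```python
import random

GHOST_BID_CADENCE = 0.20  # 20%: roughly 1 in 5 eligible users is held out

# Hypothetical in-memory holdout set; a real system would persist this so
# held-out users stay suppressed for the life of the campaign.
holdout_users = set()

def handle_eligible_impression(user_id: str) -> str:
    """Decide whether an impression that passed targeting gets a real bid or a ghost bid."""
    if user_id in holdout_users:
        return "suppressed"          # user is in the control group: never bid on them
    if random.random() < GHOST_BID_CADENCE:
        holdout_users.add(user_id)   # record a ghost bid; no impression is purchased
        return "ghost_bid"
    return "bid"                     # bid on the impression as usual
```

Because this decision happens only after the impression has already passed the line item’s targeting, the held-out users mirror the exposed audience, which is the property described below.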

Because the ghost bid isn’t an actual impression, the ad space remains open and the “ghost bid” is free. Additionally, because it’s set against the line item or ad group itself, all the targeting matches the exposed group exactly, and all optimization on the exposed group follows suit on the control group. If Millennial Urban Men are converting at a higher rate and receiving more budget in the exposed group, they’re also receiving more ghost bids, ensuring the composition of the control keeps pace.

Ghost bidding addresses the pitfalls of market tests, random samples, and PSAs while remaining inexpensive and relatively easy to implement. As a result, it has become increasingly popular with sophisticated marketers and media buyers, and it is Digital Remedy’s preferred method for creating holdouts for our Flip product.

In our next piece of this series, learn why optimizing incrementally is a crucial capability for brand marketers and agencies looking to drive tangible bottom-line results. Can’t wait? Check out our full insights report or speak to a member of our team today.

Welcome to the first part of our Intro to Incrementality series, which uses real-world examples to explain how incrementality analysis and incremental lift can help brands and agencies isolate the effect of a particular media type or campaign variable in order to accurately measure its impact. Warning: this post involves (basic) math.

Isolating the Impact

For a marketer, the line between correlation and causation can get very blurry. In the modern ad landscape, brands run ads across many different media channels—from paid search to OTT/CTV, display to streaming audio. Customers are often exposed to several different media types—and several different ads within each media type—before making a purchase or converting. Therein lies the challenge: How do marketers know which specific ads or campaigns actually drove the conversion when so many other campaigns and ads are running concurrently? How do they separate an individual channel from the rest, or from their own organic or native demand?

Enter incrementality. At its heart, this type of analysis compares a control (or unexposed) group against a test (or exposed) group and measures the difference between the two. The test group is aptly named: we’re introducing a new variable into the equation, whether that variable is a campaign, a publisher, a creative, or something else, and our hypothesis is that this variable will result in additional conversions. To measure that hypothesis, we need a baseline for what “additional” means, and that’s where the holdout group comes in: it establishes the floor above which we wish to measure.

Real-World Example

Let’s use a non-marketing example to understand how this might work. An NBA basketball team has two players who both make 40% of their foul shots. The team wants to improve that percentage, so it decides to hire a shooting coach. It designs a test to evaluate that coach, and assigns him to only one of the two players. Both players are told to do everything else they had been doing, exactly as they had been doing it. For instance, they’re told to spend the same amount of time in the gym, keep a similar diet, and maintain their same weight.

2020–21 Season:
Player A: 40% of free throws ┃ Player B: 40% of free throws

After a year, the player who worked with the shooting coach makes 80% of his free throws the following season, while the player not assigned the shooting coach makes 50% of his free throws. In this test, we have a new baseline (or control/holdout) of 50% and a new variable, the shooting coach, resulting in a test result of 80%. Both players improved, but one player improved more, so how do we measure that incremental improvement and attribute it to the shooting coach?

2021–22 Season:
Player A (control): 50% of free throws ┃ Player B with Coach (test): 80% of free throws

To measure the impact of the shooting coach, we want to answer two questions:

1. After introducing the shooting coach, how much more likely is Player B to make a foul shot? We call this incremental lift. To answer this question, we use the formula:

(test: 80% – holdout: 50%) / holdout: 50%
The result is 60%

A lift of 60% equates to a multiple of 1.6. In other words, Player B is now 1.6 times as likely to make his foul shots. So, we can accurately say that Player B, after working with the shooting coach, was 60% more, or 1.6 times as likely, to make a free throw than he was without working with the shooting coach.

60% more likely to make a free throw

2. Of all the free throws Player B now makes (that new 80% figure), what percentage can we attribute to the shooting coach? The formula for incrementality is:

(test: 80% – holdout: 50%) / test: 80%
This result is 37.5%

Leveraging incrementality, we can determine that the shooting coach is directly responsible for 37.5% of the shots made after he began working with the player: 37.5% of those made shots would otherwise have been missed if Player B and the coach had never worked together.
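For anyone who prefers code to prose, here is a minimal sketch of both formulas applied to the basketball numbers (the function names are ours, used purely for illustration):

```python
def incremental_lift(test_rate: float, holdout_rate: float) -> float:
    """How much more likely the test group is to convert than the holdout: (test - holdout) / holdout."""
    return (test_rate - holdout_rate) / holdout_rate

def incrementality(test_rate: float, holdout_rate: float) -> float:
    """Share of the test group's conversions attributable to the new variable: (test - holdout) / test."""
    return (test_rate - holdout_rate) / test_rate

# Holdout (Player A): 50% of free throws; test (Player B with the coach): 80%
print(incremental_lift(0.80, 0.50))  # 0.6   -> 60% lift, i.e. 1.6x as likely to make a free throw
print(incrementality(0.80, 0.50))    # 0.375 -> 37.5% of made shots credited to the coach
```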

Back to the Subject

Let’s bring this concept back to marketing and replace the NBA players with a brand running search and social media campaigns. The brand observes two subsets of its consumers who are exposed to search and social media campaigns and finds that both groups, which resemble each other behaviorally and demographically, convert at .1%. Put another way, of the observed consumers who were exposed to search and/or social media, one in every thousand buys its product.

The brand has a hypothesis: if the brand introduces CTV advertising into its media mix, the conversion rate will increase. In this example, OTT/CTV campaigns are standing in for our shooting coach.

So the test begins: the brand starts running OTT/CTV campaigns against one of the two subsets of consumers. For both groups, it keeps search and social media spending and campaigns static. At the end of the campaign, the brand analyzes the data and discovers that the group exposed to OTT/CTV ads in addition to search and social media ads converts at .2% (test), while the group exposed only to search and social media ads converts at .15% (holdout).

Using incremental lift analysis, we can determine that OTT/CTV was responsible for a 33.33% incremental lift in purchases

(test: .2% – holdout: .15%) / holdout: .15% = 33.33%

and has an incrementality percentage of 25%

(test: .2% – holdout: .15%) / test: .2% = 25%

In other words: those exposed to the brand’s OTT/CTV campaigns are 33.33% more likely—or 1.33 times as likely—to purchase as those not exposed, and the OTT/CTV campaigns are directly responsible for 25% of all conversions recorded.
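The same arithmetic, as a short self-contained sketch using the rates from this example:

```python
test_rate, holdout_rate = 0.002, 0.0015  # .2% (exposed to OTT/CTV) vs. .15% (holdout)

lift = (test_rate - holdout_rate) / holdout_rate              # incremental lift
incrementality_pct = (test_rate - holdout_rate) / test_rate   # incrementality

print(f"Incremental lift: {lift:.2%}")                # 33.33%
print(f"Incrementality:   {incrementality_pct:.2%}")  # 25.00%
```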

Incrementality aims to answer the question: would this person have converted had they not been exposed to a particular campaign or campaign variable? Just like the shooting coach, the OTT/CTV campaign provided a needed boost in conversion rate. By using incrementality analysis instead of simply looking at the new conversion rate or the new number of conversions, the brand is able to isolate and measure that boost and understand more clearly the impact of every new variable it introduces into its marketing mix.

Coming up in our next post, we’ll discuss how brands and agencies can create and leverage holdout (or control) groups. Can’t wait? Check out our full insights report or speak to a member of our team today.

The CTV/OTT space presents many advantages for B2B marketers, including clear targets, ideal buyer profiles, and precise targeting strategies. However, B2B marketers are feeling increased pressure to prove the value of CTV/OTT in reaching their target audience, and without the right digital attribution, it can be tough to prove success and maximize return on ad spend (ROAS).

TJ Sullivan, SVP of Sales at Digital Remedy, was recently featured in MarketingProfs, where he discussed digital attribution in the OTT/CTV space. Specifically, TJ outlined five ways to succeed with CTV/OTT, improve ROAS, and boost your bottom line:

Use multi-touch attribution

Multi-touch attribution for CTV/OTT gives you the ability to see the customer journey beyond just first- and last-touch—which is especially crucial for B2B marketers looking to understand the full customer journey.

Factor in incrementality

CTV/OTT metrics let you dive into the details to understand which tactics moved the needle with the right people in the right way. Factoring in incrementality lets you see the real picture by taking all attributed conversions and filtering out ones that the campaign didn’t directly drive.

Match strategy to sales insights

Good B2B marketers know how to recognize buy signals—those actions and behaviors that indicate a prospect is ready to buy. Being able to recognize those “telling” signals and match them to the strategy—optimizing creative, targeting, and placement to drive those signals—is a key advantage in CTV/OTT advertising when it’s done right.

Use actionable CRM data to segment and target

The beauty of CTV, compared with linear, is that it enables real-time optimization of ad buys: you can move your budget toward the publishers, creatives, dayparts, audiences, and geographies that are driving the most action, to fully maximize your spend.

Use real-time feedback to target in real time

Combining multi-touch attribution with real-time performance metrics creates a powerful platform for fully optimizing targeting and content delivery.

Work with a trusted partner to achieve success 

While it may be relatively new to some marketers, multi-touch attribution must become a top priority, as it allows you to not only identify the specific touchpoints throughout the consumer journey that triggered the desired action, but also leverage valuable performance insights to optimize future campaigns. With the Flip OTT performance platform, Digital Remedy is making it accessible and simple for brands and marketers at all levels to win in the OTT/CTV space. To learn how you can leverage alternative attribution methodologies within our award-winning CTV platform, visit www.digitalremedy.com/flip.

Check out TJ’s full insights on MarketingProfs and be sure to follow Digital Remedy on LinkedIn and Twitter for the latest updates.

Digital Remedy recently hosted a Tech-Talk webinar, “To Last Touch & Beyond: Measuring Performance CTV,” through eMarketer, explaining why marketers should move beyond last-touch attribution methodology, focusing instead on more nuanced ways to drive bottom-line results via OTT/CTV—and why working with an experienced media partner is crucial in today’s complex ad space. If you missed it, here’s a recap:


Key Takeaways


The marketing funnel is collapsing, and CTV offers a unique ad environment and full-funnel measurement capabilities.

As the duopoly becomes increasingly saturated, improvements in measurement have proved that lower-funnel media can have branding impacts, and upper-funnel media can have performance impacts—and this is most evident in the CTV space. CTV’s ability to merge the often separated performance and brand marketing worlds is redefining the digital ad space.


As the consumer buying journey evolves, marketers need to think about more enhanced measurement, including sophisticated attribution methodologies.

In the modern age of marketing, the typical consumer requires an average of 56 touchpoints before making a purchase. Very rarely does someone convert after a single ad exposure. Marketers need to take every touchpoint (across devices and platforms) into consideration and assign credit accordingly. While last-touch attribution has long been the status quo of the ad space, marketers can discover more insights by expanding their attribution methodology.


While video completion rate (VCR) has long been the go-to metric for marketers, it is no longer the standard for evaluating campaign performance.

While innovations in attribution have shifted the focus away from VCR and toward ROAS and CPA metrics, those goals are merely scratching the surface of CTV measurement. Many marketers are looking to optimize their CTV campaign performance, but don’t know where to start. Working with the right media partner will allow you to uncover valuable—previously unattainable—campaign performance insights.


You can watch the full presentation on-demand or view the slides. Interested in learning how you can start driving bottom-line results? Schedule a custom Flip demo to see our award-winning CTV performance platform in action.


For the latest industry trends and insights—including more on how to leverage multiple attribution methodologies and incrementality—check out our blog and sign up to receive our monthly Trends newsletter delivered right to your inbox.