What Is Incrementality?

Incrementality measures the true causal lift a marketing activity produces on a desired outcome, such as purchases, sign-ups, or installs. The measure counts only what happened above and beyond what would have occurred without the campaign. It answers one question: did this ad actually cause the conversion, or would the customer have converted anyway?

This distinction matters enormously. Attribution models assign credit to touchpoints a customer encountered before converting. Incrementality testing determines whether those touchpoints had any real effect. A customer who sees a retargeting ad and buys a product they had already decided to purchase represents no incremental value. How attribution credits the click is irrelevant.

The Incrementality Formula

The core calculation compares conversion behavior between an exposed group (users who saw the ad) and a holdout control group (users who did not):

  • Incremental Conversions = Conversions (Exposed) − Conversions (Control)
  • Incremental Lift % = (CVR Exposed − CVR Control) / CVR Control × 100
  • iROAS = Incremental Revenue / Ad Spend
  • Cost Per Incremental Conversion = Ad Spend / Incremental Conversions

Example: A retailer runs a Facebook campaign targeting 500,000 users. A randomized 10% holdout of 50,000 users sees no ads. After 30 days, the exposed group converts at 4.2% and the control group at 3.8%. The incremental lift is 10.5% ((4.2 − 3.8) / 3.8). If the campaign spent $200,000 and generated $1.2 million in total revenue, only the incremental portion is causally attributable to the ads. That works out to roughly 9.5% of revenue, or $114,000, giving an iROAS of $114,000 / $200,000 = 0.57. The campaign destroyed value despite appearing profitable under last-click attribution.
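The arithmetic in this example can be reproduced in a few lines. This is a sketch of the calculation only; the variable names are ours:

```python
# Worked example: 500,000 targeted users with a randomized 10% holdout.
exposed_users = 450_000       # 500,000 targeted minus the 50,000 holdout
cvr_exposed = 0.042
cvr_control = 0.038
ad_spend = 200_000
total_revenue = 1_200_000

# Incremental lift: relative improvement over the control conversion rate.
lift = (cvr_exposed - cvr_control) / cvr_control          # ~0.105 (10.5%)

# Incremental conversions: conversions above the control baseline.
incremental_conversions = exposed_users * (cvr_exposed - cvr_control)  # 1,800

# Incremental revenue, assuming revenue scales with conversions.
total_conversions = exposed_users * cvr_exposed           # 18,900
incremental_share = incremental_conversions / total_conversions  # ~9.5%
incremental_revenue = total_revenue * incremental_share   # ~$114,000

iroas = incremental_revenue / ad_spend                    # ~0.57
print(f"lift={lift:.1%}, iROAS={iroas:.2f}")
```

Note the assumption that revenue per conversion is the same in both groups; if ad-driven conversions have different basket sizes, incremental revenue should be measured directly rather than prorated.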

Why Incrementality Exposes Attribution’s Blind Spots

Standard attribution models overcount conversions that would have happened organically. Three failure modes are most common:

  • Retargeting inflation: Retargeting audiences consist largely of users already deep in the purchase funnel. Showing them an ad frequently claims credit for a sale they were already going to make.
  • Brand keyword cannibalization: Paid search ads on branded terms often intercept users who would have found the site through organic search anyway. Google’s own internal research suggests 50–80% of branded paid clicks represent zero incremental traffic for established brands.
  • High-intent audience bias: Algorithmic delivery systems optimize for users most likely to convert, which naturally skews toward users who would convert regardless of ad exposure.

Procter & Gamble’s former chief brand officer, Marc Pritchard, made incrementality testing a cornerstone of the company’s media overhaul beginning in 2017. P&G had cut more than $200 million in digital spend and found that sales were unaffected in most markets. The exercise revealed that a significant portion of attributed digital conversions were not incremental.

How Incrementality Tests Work

Ghost Bids (Synthetic Holdouts)

In programmatic environments, ghost bidding withholds winning impressions from a randomly selected control group while still recording that a bid would have been placed. This preserves audience comparability without wasting media budget. Meta’s Conversion Lift product and Google’s Conversion Lift Studies use variants of this methodology.
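Platforms implement ghost bidding inside their ad servers, but the core assignment idea is deterministic random bucketing. A minimal sketch under our own assumptions (the function name and hashing scheme are illustrative, not any platform's API):

```python
import hashlib

def in_holdout(user_id: str, study: str, holdout_rate: float = 0.10) -> bool:
    """Deterministically assign a user to the ghost-bid control group.

    Hashing (study, user_id) yields a stable pseudo-random bucket, so the
    same user lands in the same arm for the life of the study.
    """
    digest = hashlib.sha256(f"{study}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return bucket < holdout_rate

# At auction time: if in_holdout(...) is True, log the would-be impression
# as a "ghost" exposure and suppress the ad; otherwise serve it normally.
```

Deterministic hashing matters because the control group must stay stable across billions of auctions without a shared lookup table.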

Geo-Based Holdouts

Geographic holdout tests pause or reduce spend in matched market pairs (for example, Portland versus Denver, matched on population, income, and baseline conversion rate) and compare outcomes. This method suits television, out-of-home, and any channel where user-level holdouts are technically difficult. Airbnb and DoorDash have both published geo-holdout frameworks for measuring offline media incrementality.
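One common way to read a geo holdout is a simple difference-in-differences across the matched pair. A sketch with made-up numbers (real frameworks, including the published ones, add significance testing and synthetic-control weighting on top of this):

```python
def did_lift(test_pre: float, test_post: float,
             ctrl_pre: float, ctrl_post: float) -> float:
    """Difference-in-differences lift for a matched market pair.

    Subtracts the control market's change (seasonality, macro trends)
    from the test market's change, isolating the campaign effect.
    """
    return (test_post - test_pre) - (ctrl_post - ctrl_pre)

# Hypothetical weekly conversions: test market ran ads, control paused.
lift = did_lift(test_pre=1_000, test_post=1_180,
                ctrl_pre=980, ctrl_post=1_020)
print(lift)  # 140 conversions attributable to the campaign
```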

Time-Based Tests

Pulsing tests alternate campaign flights on and off across time periods. They are simpler to execute but more prone to confounding seasonal or competitive factors, and are generally considered less rigorous than randomized user or geo holdouts.

Incremental ROAS vs. Reported ROAS

The gap between reported ROAS and incremental ROAS (iROAS) is often substantial. A retargeting campaign might report a 6x ROAS on a last-click basis while delivering an iROAS of 1.2x once organic converters are stripped out. The threshold for a positive iROAS varies by business model: a subscription company with high customer lifetime value may accept an iROAS below 1.0 during an acquisition phase, while a low-margin e-commerce brand may require 2.0x or higher to justify spend.
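The gap can be stated directly: if a lift test shows that only a fraction of attributed revenue is truly incremental, iROAS is reported ROAS scaled by that fraction. A sketch, using an assumed 20% incremental share to match the 6x-to-1.2x retargeting figures:

```python
def iroas_from_reported(reported_roas: float, incremental_share: float) -> float:
    """iROAS when only a fraction of attributed revenue is incremental."""
    return reported_roas * incremental_share

# Retargeting example: 6x reported ROAS, ~20% of conversions incremental.
print(f"{iroas_from_reported(6.0, 0.20):.1f}")  # prints 1.2
```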

iROAS Benchmarks by Channel (Approximate Industry Ranges)

  • Branded Paid Search: reported ROAS 8–20x; iROAS 0.5–2x
  • Retargeting Display: reported ROAS 4–10x; iROAS 1–3x
  • Prospecting Social: reported ROAS 1.5–4x; iROAS 1.2–3.5x
  • Connected TV: reported ROAS hard to measure; iROAS 0.8–2x
  • Non-Branded Paid Search: reported ROAS 3–7x; iROAS 2–5x

Incrementality and Media Mix Modeling

Incrementality tests and marketing mix modeling (MMM) are complementary, not competing, measurement approaches. Incrementality tests produce high-precision causal estimates for individual channels or campaigns but require sufficient traffic and budget to achieve statistical significance. MMM analyzes aggregate spend and revenue data across all channels over time to estimate relative contribution at a portfolio level.

Best-in-class measurement programs use incrementality test results to calibrate MMM coefficients, correcting the model for known biases in specific channels. Meta’s Robyn open-source MMM library includes an incrementality calibration input specifically for this purpose.

Statistical Requirements

A valid incrementality test requires a large enough sample to detect a meaningful effect. Small differences in conversion rates demand large audiences. A test seeking to detect a 5% lift at 80% statistical power with a 3% baseline conversion rate typically requires 250,000 or more users per group. Tests run on small budgets or niche audience segments frequently produce underpowered results and should not be acted on. Minimum detectable effect (MDE) calculators help determine whether a test is feasible before it begins.
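The sample-size requirement can be sanity-checked with the standard normal-approximation formula for a two-proportion test. A rough sketch; the z-values below correspond to a two-sided alpha of 0.05 and 80% power, and the result lands in the same ballpark as the figure above:

```python
from math import ceil, sqrt

def users_per_group(baseline_cvr: float, relative_lift: float) -> int:
    """Approximate users per arm for a two-proportion z-test.

    Fixed at two-sided alpha = 0.05 (z = 1.96) and 80% power (z = 0.8416).
    """
    z_alpha, z_beta = 1.96, 0.8416
    p1 = baseline_cvr
    p2 = baseline_cvr * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# 3% baseline CVR, 5% relative lift: roughly 200,000+ users per group.
print(users_per_group(0.03, 0.05))
```

Halving the detectable lift roughly quadruples the required sample, which is why small-budget tests are so often underpowered.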

Applying Incrementality Findings

Incrementality data typically informs three decisions:

  1. Budget reallocation: Channels or tactics with low iROAS are reduced in favor of higher-incrementality spend. This often means cutting branded search and retargeting in favor of prospecting.
  2. Audience exclusions: Users who would convert organically (frequent visitors, loyalty members, recent purchasers) can be excluded from paid targeting to avoid paying for conversions that would have happened anyway.
  3. Bid strategy calibration: iROAS targets replace reported ROAS targets in automated bidding algorithms, preventing systems from over-spending on easily attributed but non-incremental conversions.

Incrementality measurement connects marketing investment directly to causal business impact, making it a foundation of rigorous performance marketing alongside A/B testing and attribution analysis.

Frequently Asked Questions About Incrementality

What does incrementality mean in marketing?

Incrementality refers to the measurable causal lift a marketing activity produces on a business outcome. An incremental conversion is one that would not have occurred without the ad exposure. If a customer would have purchased regardless, that conversion is not incremental and should not be credited to the campaign.

How is incrementality different from attribution?

Attribution assigns credit to touchpoints a customer encountered before converting. Incrementality testing determines whether those touchpoints had any real causal effect. A channel can receive full attribution credit and still have zero incrementality if the conversions would have occurred through organic channels anyway.

What is iROAS and how does it differ from reported ROAS?

iROAS (incremental return on ad spend) divides only the revenue caused by the campaign by total ad spend, stripping out conversions that would have happened organically. Reported ROAS counts all attributed revenue. An iROAS below 1.0 means the campaign destroyed value, even when reported ROAS looks strong.

How many users do you need for an incrementality test?

A test designed to detect a 5% lift at 80% statistical power with a 3% baseline conversion rate typically requires 250,000 or more users per group. Tests run on smaller budgets or niche audience segments are often underpowered and should not be acted on. Use a minimum detectable effect (MDE) calculator before committing to a test design.

Which marketing channels tend to have the lowest incrementality?

Branded paid search and retargeting display consistently show the largest gaps between reported ROAS and true iROAS. Branded paid search may report 8–20x ROAS but deliver only 0.5–2x iROAS because most of those clicks would have arrived via organic search anyway. Non-branded paid search and prospecting social tend to deliver stronger incremental results.