Lift test results in WorkMagic provide a clear, causal view of how your advertising contributes to performance. This article will walk you through how to interpret your results, understand the halo effect, and evaluate cross-channel impact—especially when measuring new customer acquisition.
1. How to Interpret Lift Test Results
The lift test results page is broken into two sections: Summary metrics and visual performance comparisons. These metrics reflect the difference between the Test group (exposed to ads) and the Control group (not exposed).
Key Metrics Explained
| Metric | Definition |
| --- | --- |
| Ad Spend | Total ad spend during the test period. |
| Incremental Orders | The difference between the test geo and the control geo, adjusted for broader changes across all areas, reflecting the overall incremental impact. |
| Incremental Contribution (%) | The incremental lift expressed as a share of the selected metric's total. |
| Cost per Incremental Order | The incremental change in overall cost per order, calculated as ad spend / (incremental contribution × total orders). |
| Confidence Score | The degree of confidence that the incremental lift is not due to random chance. A higher score indicates a more reliable, meaningful lift; above 90%, the lift is considered statistically significant. |
Example: If your test shows 2,540 incremental orders at a 95% confidence level, it means you can be 95% confident that these orders were truly driven by the campaign—not by chance.
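The metric definitions above can be sketched as a small calculation. The input numbers here are hypothetical, chosen only to illustrate the formula:

```python
# Hypothetical inputs for illustration only (not from a real test).
ad_spend = 50_000.0                 # total ad spend during the test period
total_orders = 88_000               # all orders observed in the test period
incremental_contribution = 0.0289   # incremental contribution: 2.89% of orders

# Incremental orders implied by the contribution share.
incremental_orders = incremental_contribution * total_orders

# Cost per Incremental Order = ad spend / (incremental contribution * total orders).
cost_per_incremental_order = ad_spend / incremental_orders

print(round(incremental_orders))               # 2543
print(round(cost_per_incremental_order, 2))    # 19.66
```

A lower cost per incremental order means the campaign is driving lift more efficiently, which is the number to compare across tests rather than platform-reported CPA.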
2. DTC vs. Halo Effect
What is the Halo Effect?
The Halo Effect captures lift that occurs outside the Direct-to-Consumer (DTC) store, such as:
Amazon
TikTok Shop
Example Interpretation
In your results view:
DTC result only (e.g., Shopify orders): 2,103 incremental orders
Total lift (DTC + Halo): 2,959 incremental orders
That means 856 additional orders came from halo channels such as Amazon—orders that would not have been captured in platform attribution or DTC-only analysis.
Why it Matters
Most ad platforms only report DTC conversions. Ignoring halo impact understates total lift, especially for brands with multi-channel distribution.
3. Viewing New Customer Lift
When your primary KPI is new customer acquisition, WorkMagic will also show:
Incremental New Customers
Cost per Incremental New Customer
Confidence Score
These metrics help you understand not just total order lift, but how much of that lift is driving net new customer growth, a key input for long-term ROI modeling.
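The new-customer metrics follow the same pattern as the order-level ones. A minimal sketch with hypothetical figures (the variable names here are illustrative, not WorkMagic field names):

```python
# Hypothetical inputs for illustration only.
ad_spend = 50_000.0
incremental_new_customers = 1_250   # incremental new customers from the lift test

# Cost per incremental new customer: what the campaign actually paid
# for each net-new customer it caused.
cost_per_incremental_new_customer = ad_spend / incremental_new_customers

print(cost_per_incremental_new_customer)  # 40.0
```

Comparing this figure against expected customer lifetime value is a common way to feed lift results into long-term ROI modeling.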
4. Cross-Channel Impact
Lift tests can uncover spillover effects across other platforms and media types.
The Cross-Channel Table Displays:
| Column | Description |
| --- | --- |
| Test-Geo vs Control-Geo | Attributed results under the data-driven attribution model. |
| Incremental Lift | The difference between the test geo and the control geo, reflecting the incremental impact and the credit that should have been attributed to the tested channel. |
| Confidence Score | The degree of confidence that the incremental lift is not due to random chance; above 90%, the lift is considered statistically significant. |
Common Cross-Channel Patterns Observed:
Paid Social drives organic search or email uplift
Meta retargeting influences direct traffic or referral
Cross-channel insights also guide attribution-model calibration: a synergy between channels detected by a lift test may indicate that those channels have been incorrectly credited with the tested channel's impact under the current attribution model.
5. Attribution Calibration
The results of a lift test represent your true incremental baseline. WorkMagic uses these findings to recalibrate attribution models.
Attribution Comparison Example
| Attribution Type | Incremental Contribution (%) |
| --- | --- |
| Lift Test (True Incrementality) | 2.88% |
| Ad Platform Reported | 2.15% (understates by 28.93%) |
| Data-Driven Attribution | 1.27% (understates by 55.9%) |
| Last Click | 0.31% (understates by 89.33%) |
This comparison helps you understand:
Where your current attribution may be over- or under-valuing channels
How to weight results more accurately in your marketing mix modeling (MMM)
Why lift testing is a critical complement to traditional attribution tools
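The "understates by" figures in the comparison table follow from measuring each model's reported contribution against the lift-test baseline. A sketch of that calculation (the published percentages are rounded, so values recomputed from the rounded table entries can differ slightly from the displayed ones):

```python
def understatement(lift_pct: float, attributed_pct: float) -> float:
    """Percent of true incremental contribution missed by an attribution model."""
    return (1 - attributed_pct / lift_pct) * 100

lift = 2.88  # lift-test incremental contribution, in %

print(round(understatement(lift, 1.27), 1))  # 55.9 (data-driven attribution)
print(round(understatement(lift, 0.31), 1))  # 89.2 (last click)
```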
6. Confidence Score and Significance
Confidence scores indicate how likely it is that the lift observed was not due to random chance.
| Score | Interpretation |
| --- | --- |
| ≥ 90% | Statistically significant result |
| 80–89% | Directionally reliable, but caution advised |
| < 80% | Not statistically significant |
If your result is not significant, you may need to:
Increase test spend or duration
Reevaluate segmentation (e.g., group by product category)
Run additional experiments to improve precision
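The significance bands above can be expressed as a simple decision rule (a sketch of the interpretation logic, not WorkMagic's internal implementation):

```python
def interpret_confidence(score: float) -> str:
    """Map a confidence score (in %) to its significance band."""
    if score >= 90:
        return "statistically significant"
    if score >= 80:
        return "directionally reliable; interpret with caution"
    return "not statistically significant"

print(interpret_confidence(95))  # statistically significant
print(interpret_confidence(84))  # directionally reliable; interpret with caution
print(interpret_confidence(72))  # not statistically significant
```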
7. What To Do Next
Once you've reviewed your results:
Apply learnings to optimize campaign budget allocation
Use halo data to justify full-funnel or cross-channel investments
Work with your WorkMagic team to adjust attribution modeling or iMMM
Plan your next test based on newly calibrated benchmarks