A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook
Brett R. Gordon, Florian Zettelmeyer, Neha Bhargava, and Dan Chapsky, 2018, 18-113-05
Despite the availability of granular data, measuring the causal effects of digital advertising remains challenging. Advertising explains only a small amount of variation in outcomes, and even small amounts of advertising endogeneity (e.g., likely buyers are more likely to be exposed to the ad) can severely bias causal estimates of its effectiveness. In principle, these issues could be addressed using randomized controlled trials (RCTs). In practice, few online ad campaigns rely on RCTs, and instead use observational methods to estimate ad effects.
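The selection problem described above can be illustrated with a small simulation (all names and numbers here are illustrative assumptions, not from the study): when users with a higher latent purchase propensity are more likely to be exposed to the ad, a naive exposed-versus-unexposed comparison overstates the ad's effect, while a randomized holdout recovers it.

```python
import random

random.seed(42)

N = 200_000
TRUE_LIFT = 0.02  # assumed: ad raises purchase probability by 2 points

def lift_estimate(randomized: bool) -> float:
    """Difference in purchase rates, exposed minus unexposed."""
    buys = {True: 0, False: 0}
    count = {True: 0, False: 0}
    for _ in range(N):
        base = random.betavariate(2, 20)      # latent purchase propensity
        if randomized:
            exposed = random.random() < 0.5   # RCT: exposure assigned at random
        else:
            # Targeting: likely buyers are more likely to see the ad
            exposed = random.random() < min(1.0, 5 * base)
        bought = random.random() < base + (TRUE_LIFT if exposed else 0.0)
        count[exposed] += 1
        buys[exposed] += bought
    return buys[True] / count[True] - buys[False] / count[False]

rct = lift_estimate(randomized=True)
naive = lift_estimate(randomized=False)
print(f"true lift {TRUE_LIFT:.3f} | RCT estimate {rct:.3f} | "
      f"naive observational estimate {naive:.3f}")
```

The RCT estimate lands near the true 2-point lift, while the naive observational contrast is several times larger because the exposed group was already more likely to buy. This is only a stylized sketch of the bias mechanism, not the matching or regression adjustments evaluated in the paper.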
In this study, Brett Gordon, Florian Zettelmeyer, Neha Bhargava, and Dan Chapsky explore how well observational approaches to measurement fare against randomized controlled trials on the same marketing campaigns. This analysis is of particular interest because of recent, large improvements in observational methods for causal inference.
Using data from 15 U.S. advertising experiments at Facebook comprising 500 million user-experiment observations and 1.6 billion ad impressions, they implement a variety of matching and regression-based methods and compare their results with those obtained from the RCTs.
They find significant differences between the ad effectiveness estimates obtained from RCTs and those obtained from observational approaches. The observational methods generally overestimate ad effectiveness relative to the RCT, although in some cases they significantly underestimate it. The bias can be large: in half of the studies, the estimated percentage increase in purchase outcomes was off by a factor of three across all observational methods.
Given the small number of studies, the authors could not identify campaign characteristics associated with strong biases. They did find that observational methods approximate RCT lift better for registration and page-view outcomes than for purchases. Finally, no single method consistently dominates: a given approach may perform well in one study but poorly in another.
These findings shed light on whether, as is commonly believed in the industry, observational methods using good individual-level data are “good enough” for ad measurement, or whether even good data prove inadequate to yield reliable estimates of advertising effects. Their results support the latter conclusion.
Brett R. Gordon is Associate Professor of Marketing, and Florian Zettelmeyer is Professor of Marketing, both at the Kellogg School of Management, Northwestern University. Neha Bhargava is Manager, Ads Research, and Dan Chapsky is Manager, Ads Research, both at Facebook.
Note: To maintain privacy, no data contained personally identifiable information that could identify consumers or advertisers.
We thank Daniel Slotwiner, Gabrielle Gibbs, Joseph Davin, Brian d’Alessandro, and Fangfang Tan at Facebook. We are grateful to Garrett Johnson, Randall Lewis, and seminar participants at Bocconi, CKGSB, Columbia, eBay, ESMT, Facebook, HBS, LBS, Northwestern, Temple, UC Berkeley, UCL, NBER Digitization, NYU Big Data Conference, and ZEW for helpful comments and suggestions. We particularly thank Meghan Busse for extensive comments and editing suggestions. Gordon and Zettelmeyer have no financial interest in Facebook and were not compensated in any way by Facebook or its affiliated companies for engaging in this research.