Abstract
Contextual advertising involves matching features of ads to features of the media context where they appear. The authors propose AdGazer, a new machine learning procedure to support contextual advertising. It comprises a theoretical framework organizing high- and low-level features of ads and contexts, feature engineering models grounded in this framework, an XGBoost model predicting ad and brand attention, and an algorithm optimally assigning ads to contexts. AdGazer includes a multimodal large language model to extract high-level topics predicting the ad–context match. This research uses a unique eye-tracking database containing 3,531 digital display ads and their contexts, and aggregate ad and brand gaze times. The authors compare AdGazer’s predictive performance with that of two feature learning models, VGG16 and ResNet50. AdGazer achieves high predictive accuracy, with holdout correlations of .83 for ad gaze and .80 for brand gaze, outperforming both feature learning models and generalizing better to out-of-distribution ads. Context features jointly contributed at least 33% to predicted ad gaze and about 20% to predicted brand gaze, which is good news for managers who practice or are considering contextual advertising. The authors demonstrate that the theory-informed AdGazer effectively matches ads to advertising vehicles and their contexts, optimizing ad gaze more than current practice and alternatives like text-based and native contextual advertising.
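The abstract mentions an algorithm that optimally assigns ads to contexts given predicted attention. The paper's own formulation is not reproduced here; as a minimal sketch, assuming a square matrix of hypothetical predicted ad-gaze scores (in AdGazer these would come from the XGBoost attention model), the one-to-one assignment maximizing total predicted gaze can be found by exhaustive search:

```python
from itertools import permutations

# Hypothetical predicted ad-gaze scores: rows = ads, columns = contexts.
# The numbers are illustrative only, not from the paper.
gaze = [
    [0.8, 0.3, 0.5],
    [0.4, 0.9, 0.2],
    [0.6, 0.5, 0.7],
]

def best_assignment(scores):
    """Brute-force search for the one-to-one ad-to-context assignment
    that maximizes total predicted gaze (fine for small matrices;
    larger problems would use e.g. the Hungarian algorithm)."""
    n = len(scores)
    best_total, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(scores[ad][ctx] for ad, ctx in enumerate(perm))
        if total > best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

assignment, total = best_assignment(gaze)
print(assignment, round(total, 2))  # assignment[i] is the context for ad i
```

For realistic numbers of ads and contexts, the same objective is solved in polynomial time by standard linear assignment solvers rather than enumeration.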
