Bandit Algorithms for Advertising Optimization: A Comparative Study

Bibliographic Details
Main Author: Tian Ziyue
Format: Article
Language: English
Published: EDP Sciences, 2025-01-01
Series: ITM Web of Conferences
Online Access: https://www.itm-conferences.org/articles/itmconf/pdf/2025/04/itmconf_iwadi2024_01019.pdf
Summary: In recent years, the rapid development of digital advertising has challenged advertisers to choose quickly among multiple options, a decision that is crucial for increasing user engagement and return on investment. Traditional A/B testing, however, often suffers from slow response times and difficulty adapting to dynamic environments, which limits its effectiveness. This article explores the application of multi-armed bandit algorithms in digital advertising, focusing on the performance of the ε-greedy, Upper Confidence Bound (UCB), Linear Upper Confidence Bound (LinUCB), and Softmax algorithms. Experimental results show that incorporating user characteristics can significantly improve the accuracy and relevance of advertising recommendations. Among the algorithms tested, LinUCB, which exploits contextual information, outperformed the three non-contextual algorithms in cumulative reward and accuracy, showing a clear advantage once exploration was complete. A limitation of this study is that the static experimental dataset cannot fully simulate the dynamic feedback of real-time advertising environments, which restricts the generalizability of the findings to rapidly changing contexts. Future research should focus on adaptive, context-sensitive algorithms that can handle feedback delays, which would improve performance in complex advertising settings. Overall, this study deepens the understanding of multi-armed bandit algorithms in advertising and provides strategic guidance for user-targeted advertising.
ISSN: 2271-2097
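
To illustrate the contextual approach that the summary credits for LinUCB's advantage over the non-contextual algorithms, below is a minimal sketch of disjoint LinUCB in the style of Li et al. (2010). The feature dimension, the exploration parameter alpha, and the logistic click simulator are illustrative assumptions, not details taken from the article's experiments.

```python
import numpy as np

# Minimal sketch of disjoint LinUCB; dimensions, alpha, and the
# simulated click model below are hypothetical, not from the article.

class LinUCBArm:
    def __init__(self, d, alpha=1.0):
        self.alpha = alpha
        self.A = np.eye(d)     # ridge-regression Gram matrix for this arm
        self.b = np.zeros(d)   # accumulated reward-weighted features

    def ucb(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                     # per-arm coefficient estimate
        # Predicted reward plus an exploration bonus that shrinks
        # as the arm gathers data in directions similar to x.
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

d, n_arms = 5, 4
arms = [LinUCBArm(d, alpha=1.0) for _ in range(n_arms)]
rng = np.random.default_rng(0)
true_theta = rng.normal(size=(n_arms, d))          # hypothetical click model

for t in range(1000):
    x = rng.normal(size=d)                         # user-context features
    chosen = max(range(n_arms), key=lambda a: arms[a].ucb(x))
    click_prob = 1.0 / (1.0 + np.exp(-true_theta[chosen] @ x))
    reward = float(rng.random() < click_prob)      # simulated click feedback
    arms[chosen].update(x, reward)
```

The per-arm bonus alpha * sqrt(x^T A^{-1} x) is what makes the method contextual: uncertainty is measured per user-feature direction rather than per arm alone, so the algorithm can recommend different ads to different users, which matches the summary's finding that incorporating user characteristics improves recommendation accuracy and relevance.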