The Issue of Proxies and Choice Architectures. Why EU Law Matters for Recommender Systems


Bibliographic Details
Main Author: Mireille Hildebrandt
Format: Article
Language: English
Published: Frontiers Media S.A., 2022-04-01
Series: Frontiers in Artificial Intelligence
Subjects: micro-targeting, machine learning, behavioral profiling, political economy, behaviorism, Goodhart effect
Online Access: https://www.frontiersin.org/articles/10.3389/frai.2022.789076/full
Description:
Recommendations are meant to increase sales or ad revenue, as these are the first priority of those who pay for them. As recommender systems match their recommendations with inferred preferences, we should not be surprised if the algorithm optimizes for lucrative preferences and thus co-produces the preferences it mines. This relates to the well-known problems of feedback loops, filter bubbles, and echo chambers. In this article, I discuss the implications of the fact that computing systems necessarily work with proxies when inferring recommendations and raise a number of questions about whether recommender systems actually do what they are claimed to do, while also analysing the often-perverse economic incentive structures that have a major impact on relevant design decisions. Finally, I will explain how the choice architectures for data controllers and providers of AI systems as foreseen in the EU's General Data Protection Regulation (GDPR), the proposed EU Digital Services Act (DSA), and the proposed EU AI Act will help to break through various vicious circles, by constraining how people may be targeted (GDPR, DSA) and by requiring documented evidence of the robustness, resilience, reliability, and the responsible design and deployment of high-risk recommender systems (AI Act).
ISSN: 2624-8212
Author Affiliations: Institute of Computing and Information Sciences (iCIS), Science Faculty, Radboud University, Nijmegen, Netherlands; Research Group Law Science Technology & Society (LSTS), Faculty of Law and Criminology, Vrije Universiteit Brussel, Brussels, Belgium