Algorithmic Interactions With Strategic Users: Incentives, Interplay, and Impact

The societal challenges posed by machine learning algorithms are becoming increasingly important, and to study them effectively, it is crucial to incorporate the incentives and preferences of users into the design of algorithms. In many cases, algorithms are designed solely around the platform's objectives, without accounting for the potential misalignment between the platform's goals and the interests of users. This thesis presents frameworks for studying the interactions between a platform and strategic users. The platform's central objective is to estimate a parameter of interest by collecting users' data. Users, however, recognizing the value of their data, demand privacy guarantees or compensation in exchange for sharing their information. The thesis examines several aspects of this problem: the estimation task itself, the allocation of privacy guarantees, and the vulnerability of those guarantees to the platform's power.

In the first part of the thesis, we formulate this question as a Bayesian-optimal mechanism design problem in which an individual can share her data in exchange for a monetary reward but has a private, heterogeneous privacy cost, which we quantify using differential privacy. We consider two popular data market architectures: central and local. In both settings, we establish minimax lower bounds for the estimation error and derive (near-)optimal estimators for given heterogeneous privacy loss levels. We then pose the mechanism design problem as the optimal selection of an estimator and payments that elicit truthful reporting of users' privacy sensitivities, and we develop efficient algorithmic mechanisms to solve this problem in both privacy settings. We also investigate the case in which users have heterogeneous sensitivities to two types of privacy loss, corresponding to the local and central privacy measures.
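To make the two architectures concrete, here is a minimal numerical sketch in Python (our illustration, not code from the thesis) of differentially private mean estimation with the standard Laplace mechanism. In the local model, each user randomizes her own data at her personal level eps_i before sharing; in the central model, the platform sees raw data and perturbs only the aggregate. The inverse-variance weighting below is a common heuristic for heterogeneous noise; the thesis derives minimax lower bounds and (near-)optimal estimators for this setting.

    import numpy as np

    def local_private_mean(data, epsilons, lo=0.0, hi=1.0, rng=None):
        # Local model: user i perturbs her own value with Laplace noise
        # calibrated to her personal privacy level eps_i; the clipped
        # value has sensitivity hi - lo.
        rng = np.random.default_rng() if rng is None else rng
        x = np.clip(np.asarray(data, dtype=float), lo, hi)
        b = (hi - lo) / np.asarray(epsilons, dtype=float)  # Laplace scales
        reports = x + rng.laplace(0.0, b)
        # Precision-weighted average: Var(Laplace(b)) = 2 * b**2, so the
        # noisier reports of more privacy-demanding users count for less.
        w = 1.0 / (2.0 * b ** 2)
        return float(np.sum(w * reports) / np.sum(w))

    def central_private_mean(data, epsilon, lo=0.0, hi=1.0, rng=None):
        # Central model: the platform averages the raw data and adds noise
        # once; the mean of n clipped values has sensitivity (hi - lo) / n.
        rng = np.random.default_rng() if rng is None else rng
        x = np.clip(np.asarray(data, dtype=float), lo, hi)
        return float(x.mean() + rng.laplace(0.0, (hi - lo) / (epsilon * x.size)))

    rng = np.random.default_rng(0)
    data = rng.uniform(0.2, 0.8, size=1000)       # true mean is about 0.5
    eps = np.array([0.5] * 500 + [5.0] * 500)     # heterogeneous privacy levels
    print(local_private_mean(data, eps, rng=rng))
    print(central_private_mean(data, epsilon=0.5, rng=rng))

For the same privacy level, the central model needs far less noise than the local one, which is one reason the choice of architecture, taken up again in the second part, matters.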

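The mechanism design step described above, choosing an estimator and payments so that users report their privacy sensitivities truthfully, can be illustrated with the simplest incentive-compatible device, a posted price. The linear cost model c_i * epsilon and all names below are our illustrative assumptions; the thesis instead solves a Bayesian-optimal design problem over estimators and payment rules.

    def posted_price_mechanism(reported_costs, epsilon, payment):
        # Take-it-or-leave-it offer: share data at privacy level epsilon
        # in exchange for `payment`. Under the assumed linear privacy cost
        # c_i * epsilon, accepting exactly when payment >= c_i * epsilon
        # is a dominant strategy, so no user gains by misreporting c_i:
        # the report only feeds an accept/reject rule that already
        # matches her true preference.
        accepted = [i for i, c in enumerate(reported_costs)
                    if payment >= c * epsilon]
        return accepted, payment * len(accepted)

    # At payment 0.4 and epsilon 1.0, only users whose privacy cost is
    # at most 0.4 participate.
    costs = [0.1, 0.3, 0.5, 0.9]
    print(posted_price_mechanism(costs, epsilon=1.0, payment=0.4))
    # -> ([0, 1], 0.8)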
In the second part, we study a different aspect of data market design: the optimal choice of architecture from both the users' and the platform's points of view. The platform collects data from users through a mechanism that can partially protect their privacy. We prove that a simple shuffling mechanism, whereby individual data is fully anonymized with some probability, is optimal from the users' viewpoint. We also develop a game-theoretic model of data sharing to study the impact of this shuffling mechanism on the platform's behavior and users' utility. In particular, we uncover an intriguing phenomenon that highlights the fragility of the provided privacy guarantees: as the value of pooled data rises for users, the platform can exploit this opportunity to weaken the privacy guarantee it provides, ultimately reducing user welfare at equilibrium.
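The shuffling mechanism of the second part can be sketched as follows, under our reading that "fully anonymized with some probability" means each report is detached from its sender's identity independently with probability q (the parameter name and pooling details are our own):

    import random

    def probabilistic_shuffler(reports, q, rng=None):
        # With probability q, a report is stripped of its user id and
        # dropped into an anonymous pool, which is then shuffled; with
        # probability 1 - q it stays attributed to its sender. q = 0 is
        # the fully identified regime, q = 1 full anonymization.
        rng = rng if rng is not None else random.Random()
        identified, anonymous = [], []
        for user_id, value in reports:
            if rng.random() < q:
                anonymous.append(value)
            else:
                identified.append((user_id, value))
        rng.shuffle(anonymous)
        return identified, anonymous

    # With q = 0.3, roughly 30% of reports land in the anonymous pool.
    reports = [(i, float(i)) for i in range(10)]
    identified, anonymous = probabilistic_shuffler(reports, q=0.3,
                                                   rng=random.Random(1))
    print(identified)
    print(anonymous)

In the game-theoretic model, the anonymization level is a strategic choice of the platform; the fragility result says that as pooled data becomes more valuable to users, the platform's equilibrium choice of guarantee weakens.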

Bibliographic Details
Main Author: Fallah, Alireza
Other Authors: Ozdaglar, Asuman
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Degree: Ph.D.
Format: Thesis
Published: Massachusetts Institute of Technology, 2023
Online Access: https://hdl.handle.net/1721.1/152735
ORCID: https://orcid.org/0000-0002-9295-704X
Rights: In Copyright - Educational Use Permitted; copyright retained by author(s). https://rightsstatements.org/page/InC-EDU/1.0/