Random Sequential Encoders for Private Data Release in NLP


Bibliographic Details
Main Author: Jaba, Andrea
Other Authors: Medard, Muriel
Format: Thesis
Published: Massachusetts Institute of Technology 2022
Online Access: https://hdl.handle.net/1721.1/144874
Description
Summary: There are many scenarios that motivate data owners to outsource the training of machine learning models on their data to external model developers. In doing so, it is in data owners' best interest to keep their data private, meaning that no third party, including the model developer, can learn anything more about the data than the labels associated with the machine learning task; this is difficult to guarantee while maintaining the model's utility on that task. In computer vision, lightweight random convolutional networks have shown potential as encoders that balance privacy and utility. This thesis presents a novel exploration of random sequential encoders, (1) random recurrent neural networks and (2) random long short-term memory (LSTM) networks, as encoding schemes for private data release in natural language processing. Experiments evaluated the utility and privacy of these encoders against two baseline encoding schemes with less privacy: (1) no encoder and (2) a random linear encoder. For the private release of a spam classification dataset, random LSTM encoders maintained the most utility among all random encoders while remaining relatively robust to the privacy attacks this thesis considers, signaling a promising direction for future experiments.
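To illustrate the core idea of a random sequential encoder, the sketch below implements a frozen LSTM whose weights are drawn at random and never trained: the data owner pushes token embeddings through it and releases only the resulting hidden-state vectors alongside the task labels. This is a minimal hypothetical NumPy sketch, not the thesis's actual architecture; the class name, dimensions, and weight distribution are all assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RandomLSTMEncoder:
    """Frozen LSTM encoder with randomly drawn, fixed weights.

    Hypothetical sketch: the thesis's actual encoder design and
    weight distribution are assumptions here.
    """

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix per gate (input, forget, cell, output),
        # each acting on the concatenated [input, hidden] vector.
        scale = 1.0 / np.sqrt(input_dim + hidden_dim)
        self.W = rng.standard_normal((4, hidden_dim, input_dim + hidden_dim)) * scale
        self.b = np.zeros((4, hidden_dim))
        self.hidden_dim = hidden_dim

    def encode(self, sequence):
        """Map a (seq_len, input_dim) embedding sequence to the final hidden state."""
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)
        for x in sequence:
            z = np.concatenate([x, h])
            i = sigmoid(self.W[0] @ z + self.b[0])  # input gate
            f = sigmoid(self.W[1] @ z + self.b[1])  # forget gate
            g = np.tanh(self.W[2] @ z + self.b[2])  # candidate cell state
            o = sigmoid(self.W[3] @ z + self.b[3])  # output gate
            c = f * c + i * g
            h = o * np.tanh(c)
        return h

# The data owner encodes token embeddings before release; only the
# encoded vectors (and task labels) leave their hands.
encoder = RandomLSTMEncoder(input_dim=8, hidden_dim=16, seed=42)
embeddings = np.random.default_rng(1).standard_normal((5, 8))  # 5 tokens
encoded = encoder.encode(embeddings)
print(encoded.shape)  # (16,)
```

Because the weights are fixed at initialization and shared only as a black box, the model developer trains a downstream classifier on the encoded vectors without ever seeing the raw text.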