Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control

Bibliographic Details
Main Authors: Yu, Mo, Chang, Shiyu, Zhang, Yang, Jaakkola, Tommi S.
Other Authors: MIT-IBM Watson AI Lab
Format: Article
Language: English
Published: Association for Computational Linguistics, 2021
Online Access: https://hdl.handle.net/1721.1/128926
Description
Summary: Selective rationalization has become a common mechanism to ensure that predictive models reveal how they use any available features. The selection may be soft or hard, and it identifies a subset of input features relevant for prediction. The setup can be viewed as a cooperative game between the selector (aka rationale generator) and the predictor, which makes use of only the selected features. The cooperative setting may, however, be compromised for two reasons. First, the generator typically has no direct access to the outcome it aims to justify, resulting in poor performance. Second, there is typically no control exerted on the information left outside the selection. We revise the overall cooperative framework to address these challenges. We introduce an introspective model which explicitly predicts and incorporates the outcome into the selection process. Moreover, we explicitly control the rationale complement via an adversary so as not to leave any useful information out of the selection. We show that the two complementary mechanisms both maintain high predictive accuracy and lead to comprehensive rationales.
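
The abstract outlines a concrete training setup: a generator that selects a rationale, a predictor that classifies from it, an adversary that tries to classify from the rationale's complement, and an introspective classifier whose predicted label conditions the selection. Below is a minimal PyTorch sketch of that structure, written to make the interplay of the players explicit. The straight-through sigmoid selection, the mean-pooled bag-of-embeddings predictors, the sparsity target, and all names (Generator, Predictor, training_step) are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Generator(nn.Module):
        """Introspective generator: conditions its token selection on a
        predicted label as well as on the input itself (an assumption on
        how 'incorporating the outcome' is realized)."""
        def __init__(self, vocab_size, embed_dim, num_classes):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.label_embed = nn.Embedding(num_classes, embed_dim)
            self.scorer = nn.Linear(embed_dim, 1)

        def forward(self, tokens, predicted_label):
            h = self.embed(tokens) + self.label_embed(predicted_label).unsqueeze(1)
            probs = torch.sigmoid(self.scorer(h).squeeze(-1))  # (batch, seq_len)
            hard = (probs > 0.5).float()
            # Straight-through estimator: hard 0/1 mask forward, soft gradient back.
            return hard + probs - probs.detach()

    class Predictor(nn.Module):
        """Classifies from a masked bag of token embeddings; reused for the
        cooperative player, the complement adversary, and the full-input
        introspection classifier alike."""
        def __init__(self, vocab_size, embed_dim, num_classes):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.out = nn.Linear(embed_dim, num_classes)

        def forward(self, tokens, mask):
            h = self.embed(tokens) * mask.unsqueeze(-1)  # zero out unselected tokens
            return self.out(h.mean(dim=1))

    def training_step(x, y, gen, pred, comp_pred, full_clf, sparsity=0.2):
        # Introspection: a full-input classifier guesses the label first,
        # and its guess conditions the generator's selection.
        with torch.no_grad():
            full_mask = torch.ones(x.shape, dtype=torch.float)
            y_hat = full_clf(x, full_mask).argmax(dim=-1)
        mask = gen(x, y_hat)
        loss_pred = F.cross_entropy(pred(x, mask), y)             # cooperative player
        loss_comp = F.cross_entropy(comp_pred(x, 1.0 - mask), y)  # adversary on complement
        loss_sparse = (mask.mean() - sparsity).abs()              # keep rationales short
        # The generator wants the rationale predictive, the complement
        # uninformative, and the selection sparse.
        loss_gen = loss_pred - loss_comp + loss_sparse
        return loss_gen, loss_comp

In practice the two returned losses would drive alternating optimizer steps: one on loss_comp for the adversary's parameters (with the mask detached), and one on loss_gen for the generator and predictor. The minus sign on loss_comp in loss_gen is what pushes the generator to leave no useful information in the complement.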