Distance-weighted Sinkhorn loss for Alzheimer’s disease classification


Bibliographic Details
Main Authors: Zexuan Wang, Qipeng Zhan, Boning Tong, Shu Yang, Bojian Hou, Heng Huang, Andrew J. Saykin, Paul M. Thompson, Christos Davatzikos, Li Shen
Format: Article
Language: English
Published: Elsevier 2024-03-01
Series: iScience
Online Access:http://www.sciencedirect.com/science/article/pii/S2589004224004334
Description
Summary: Traditional loss functions such as cross-entropy loss often quantify the penalty for each misclassified training sample without adequately considering its distance from the ground truth class distribution in the feature space. Intuitively, the larger this distance, the higher the penalty should be. Motivated by this observation, we propose a penalty called the distance-weighted Sinkhorn (DWS) loss. For each misclassified training sample (with predicted label A and true label B), its contribution to the DWS loss correlates positively with the distance the sample needs to travel to reach the ground truth distribution of all the A samples. We apply the DWS framework with a neural network to classify different stages of Alzheimer’s disease. Our empirical results demonstrate that the DWS framework outperforms traditional neural network loss functions and is comparable to or better than traditional machine learning methods, highlighting its potential in biomedical informatics and data science.
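The record does not give the paper’s exact formulation, but the core ingredient it names — an entropy-regularized optimal-transport (Sinkhorn) distance used to weight misclassified samples by how far they sit from a class distribution — can be sketched as follows. This is a minimal illustrative sketch, not the authors’ implementation: the function names (`sinkhorn_distance`, `dws_penalty`), the squared-Euclidean cost, uniform weights, and the fixed regularization `eps` are all assumptions made here for clarity.

```python
import numpy as np

def _logsumexp(z, axis):
    # Numerically stable log-sum-exp along the given axis.
    zmax = z.max(axis=axis, keepdims=True)
    return np.squeeze(zmax, axis) + np.log(np.exp(z - zmax).sum(axis=axis))

def sinkhorn_distance(x, y, eps=1.0, n_iters=200):
    """Entropy-regularized optimal-transport (Sinkhorn) cost between two
    point clouds with uniform weights, via log-domain dual iterations."""
    # Pairwise squared-Euclidean cost matrix (an assumed choice of cost).
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    n, m = C.shape
    log_a = np.full(n, -np.log(n))   # uniform source weights, in log space
    log_b = np.full(m, -np.log(m))   # uniform target weights, in log space
    f, g = np.zeros(n), np.zeros(m)  # Sinkhorn dual potentials
    for _ in range(n_iters):
        f = -eps * _logsumexp((g[None, :] - C) / eps + log_b[None, :], axis=1)
        g = -eps * _logsumexp((f[:, None] - C) / eps + log_a[:, None], axis=0)
    # Recover the transport plan and report its transport cost <P, C>.
    log_P = (f[:, None] + g[None, :] - C) / eps + log_a[:, None] + log_b[None, :]
    return (np.exp(log_P) * C).sum()

def dws_penalty(sample, true_class_features):
    # Hypothetical per-sample penalty in the spirit of DWS: the Sinkhorn
    # cost from one misclassified sample's feature vector to the empirical
    # feature distribution of a reference class.
    return sinkhorn_distance(sample[None, :], true_class_features)
```

As a sanity check, a feature cloud shifted farther away incurs a larger Sinkhorn cost, which is the monotone distance-to-penalty behavior the abstract appeals to.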
ISSN:2589-0042