Enhancing local context of histology features in vision transformers

Predicting complete response to radiotherapy in rectal cancer patients using deep learning approaches, from morphological features extracted from histology biopsies, provides a quick, low-cost and effective way to assist clinical decision making. We propose adjustments to the Vision Transformer (ViT) network to improve the utilisation of contextual information present in whole slide images (WSIs). Firstly, our position restoration embedding (PRE) preserves the spatial relationship between tissue patches, using their original positions on a WSI. Secondly, a clustering analysis of extracted tissue features explores morphological motifs which capture fundamental biological processes found in the tumour micro-environment. This is introduced into the ViT network in the form of a cluster label token, helping the model to differentiate between tissue types. The proposed methods are demonstrated on two large independent rectal cancer datasets of patients selectively treated with radiotherapy and capecitabine in two UK clinical trials. Experiments demonstrate that both models, PREViT and ClusterViT, show improvements in prediction over baseline models.
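The sketch below is a minimal, hypothetical illustration (not the authors' implementation) of how the two ideas described in the abstract could be wired into a ViT-style encoder over pre-extracted patch features: a position embedding derived from each patch's original (row, col) location on the WSI grid, and a per-patch cluster label embedding added alongside it. All class names, dimensions and the PyTorch-based design are assumptions for illustration only.

```python
# Hypothetical sketch: injecting WSI grid positions and cluster labels into a
# transformer encoder over pre-extracted patch features. Not the paper's code.

import torch
import torch.nn as nn


class PositionRestorationEmbedding(nn.Module):
    """Embed each patch by its original (row, col) grid location on the slide,
    preserving spatial relationships between patches (illustrative PRE)."""

    def __init__(self, dim: int, max_grid: int = 512):
        super().__init__()
        self.row_embed = nn.Embedding(max_grid, dim)
        self.col_embed = nn.Embedding(max_grid, dim)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (batch, n_patches, 2) integer grid positions on the WSI
        return self.row_embed(coords[..., 0]) + self.col_embed(coords[..., 1])


class ClusterTokenViT(nn.Module):
    """Transformer encoder over patch features with a class token, position
    restoration embeddings, and a per-patch cluster label embedding."""

    def __init__(self, feat_dim: int = 512, dim: int = 256,
                 n_clusters: int = 8, depth: int = 4, heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(feat_dim, dim)
        self.pre = PositionRestorationEmbedding(dim)
        self.cluster_embed = nn.Embedding(n_clusters, dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, 1)  # e.g. probability of complete response

    def forward(self, feats, coords, clusters):
        # feats: (B, N, feat_dim) patch features; coords: (B, N, 2) grid positions;
        # clusters: (B, N) labels from a separate clustering step (e.g. k-means).
        x = self.proj(feats) + self.pre(coords) + self.cluster_embed(clusters)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = self.encoder(torch.cat([cls, x], dim=1))
        return self.head(x[:, 0])  # predict from the class token


if __name__ == "__main__":
    model = ClusterTokenViT()
    feats = torch.randn(2, 100, 512)
    coords = torch.randint(0, 512, (2, 100, 2))
    clusters = torch.randint(0, 8, (2, 100))
    print(model(feats, coords, clusters).shape)  # torch.Size([2, 1])
```

In this hypothetical layout, the PRE and cluster embeddings are simply summed with the projected patch features before the encoder; the paper may combine them differently (e.g. as separate tokens or variants of the attention mechanism).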

Bibliographic Details
Main Authors: Wood, R, Sirinukunwattana, K, Domingo, E, Sauer, A, Lafarge, M, Koelzer, V, Maughan, T, Rittscher, J
Format: Conference item
Language: English
Published: Springer, 2022
Collection: OXFORD
Identifier: oxford-uuid:b695cf6b-040a-4d53-a352-f29d876c9506
Institution: University of Oxford