
Contrastive Deep Encoding Enables Uncertainty-aware Machine-learning-assisted Histopathology

Abstract: Deep neural network models can learn clinically relevant features from millions of histopathology images. However, generating high-quality annotations to train such models for each hospital, each cancer type, and each diagnostic task is prohibitively laborious. On the other hand, terabytes of training data, while lacking reliable annotations, are readily available in the public domain in some cases. In this work, we explore how these large datasets can be consciously utilized to pre-train deep networks to encode informative representations. We then fine-tune our pre-trained models on a fraction of annotated training data to perform specific downstream tasks. We show that our approach can reach the current state of the art (SOTA) for patch-level classification with only 1-10% randomly selected annotations compared to other SOTA approaches. Moreover, we propose an uncertainty-aware loss function to quantify model confidence during inference. Quantified uncertainty helps experts select the best instances to label for further training. Our uncertainty-aware labeling reaches the SOTA with significantly fewer annotations than random labeling. Lastly, we demonstrate how our pre-trained encoders can surpass the current SOTA for whole-slide image classification with weak supervision. Our work lays the foundation for data- and task-agnostic pre-trained deep networks with quantified uncertainty.
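The abstract does not specify the pre-training objective; a common choice for contrastive deep encoding of image patches is a SimCLR-style NT-Xent loss, sketched below with NumPy. The function name, temperature value, and the use of two augmented views per patch are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss (illustrative sketch).

    z1, z2: (n, d) embeddings of two augmented views of the same n patches.
    Each row in z1 is a positive pair with the matching row in z2; all other
    rows in the 2n-sample batch act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize -> cosine sim
    sim = z @ z.T / temperature                        # (2n, 2n) similarity logits
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # the positive for row i is row i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)   # cross-entropy per row
    return loss.mean()
```

Intuitively, the loss pulls embeddings of two views of the same patch together and pushes all other patches in the batch apart, which is what lets unlabeled public data shape the encoder before any annotations are used.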
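The paper's uncertainty-aware loss is not described here, but the labeling workflow it enables (experts annotate the instances the model is least confident about) can be sketched with a simple proxy: predictive entropy over class probabilities. The function names and the entropy criterion below are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Entropy of each row of class probabilities; higher = more uncertain."""
    return -(probs * np.log(probs + eps)).sum(axis=1)

def select_for_labeling(probs, budget):
    """Return indices of the `budget` most uncertain unlabeled samples.

    probs: (n_samples, n_classes) predicted class probabilities.
    budget: how many samples an expert can afford to annotate.
    """
    order = np.argsort(-predictive_entropy(probs))  # most uncertain first
    return order[:budget]
```

In an active-learning loop, the fine-tuned model would score the unlabeled pool, the top-`budget` uncertain patches would go to a pathologist for annotation, and the model would be retrained, which is how uncertainty-aware labeling can reach the same accuracy with fewer annotations than random selection.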

Related Results

Selection of Injectable Drug Product Composition using Machine Learning Models (Preprint)
BACKGROUND As of July 2020, a Web of Science search of “machine learning (ML)” nested within the search of “pharmacokinetics or pharmacodynamics” yielded over 100...
Reserves Uncertainty Calculation Accounting for Parameter Uncertainty
Abstract An important goal of geostatistical modeling is to assess output uncertainty after processing realizations through a transfer function, in particular, to...
Temporal-Aware and Intent Contrastive Learning for Sequential Recommendation
In recent years, research in sequential recommendation has primarily refined user intent by constructing sequence-level contrastive learning tasks through data augmentation or by e...
Transcriptomics extract the key chromium resistance genes of Cellulomonas
Abstract Cellulomonas fimi Clb-11 can reduce high toxic Cr (VI) to low toxic Cr (III). In this study, transcriptomics was used to analyze the key genes, which was involved ...
Contrastive Distillation Learning with Sparse Spatial Aggregation
Abstract Contrastive learning has advanced significantly and demonstrates excellent transfer learning capabilities. Knowledge distillation is one of the most effective meth...
Improving Neural Retrieval with Contrastive Learning
In recent years, neural retrieval models have shown remarkable progress in improving the efficiency and accuracy of information retrieval systems. However, challenges remain in eff...
Sampling Space of Uncertainty Through Stochastic Modelling of Geological Facies
Abstract The way the space of uncertainty should be sampled from reservoir models is an essential point for discussion that can have a major impact on the assessm...
Deep convolutional neural network and IoT technology for healthcare
Background Deep Learning is an AI technology that trains computers to analyze data in an approach similar to the human brain. Deep learning algorithms can find complex patterns in ...
