Multiple Instance Learning

233 papers with code • 0 benchmarks • 8 datasets

Multiple Instance Learning (MIL) is a form of weakly supervised learning in which training data is arranged in bags. Each bag contains a set of instances $X=\{x_1, x_2, \ldots, x_M\}$ and carries a single label $Y$, with $Y\in\{0, 1\}$ in the binary classification case. Individual labels $y_1, y_2, \ldots, y_M$ are assumed to exist for the instances within a bag, but they are unknown during training. Under the standard Multiple Instance assumption, a bag is negative if all of its instances are negative, and positive if at least one instance is positive.

Source: Monte-Carlo Sampling applied to Multiple Instance Learning for Histological Image Classification
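
The standard assumption above maps directly onto a max-pooling style classifier: instances are scored independently and the bag prediction is the maximum instance score. Below is a minimal, illustrative PyTorch sketch of this idea; it is not taken from any paper listed here, and the module name and layer sizes are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class MaxPoolingMIL(nn.Module):
    """Minimal MIL classifier: scores each instance, then applies the
    standard MIL assumption (a bag is positive iff at least one instance
    is positive) via max pooling over instance scores."""

    def __init__(self, instance_dim: int):
        super().__init__()
        self.instance_scorer = nn.Sequential(
            nn.Linear(instance_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (M, instance_dim) -- M instances, no instance labels available
        instance_logits = self.instance_scorer(bag)     # (M, 1)
        bag_logit = instance_logits.max(dim=0).values   # max over instances
        return torch.sigmoid(bag_logit)                 # P(Y = 1) for the bag

# Usage: a bag of 12 instances with 128-dim features, supervised only by Y.
model = MaxPoolingMIL(instance_dim=128)
bag = torch.randn(12, 128)
print(model(bag))  # bag-level probability
```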

Latest papers with no code

Semantics-Aware Attention Guidance for Diagnosing Whole Slide Images

no code yet • 16 Apr 2024

Accurate cancer diagnosis remains a critical challenge in digital pathology, largely due to the gigapixel size and complex spatial relationships present in whole slide images.

FRACTAL: Fine-Grained Scoring from Aggregate Text Labels

no code yet • 7 Apr 2024

Large language models (LLMs) are being increasingly tuned to power complex generation tasks such as writing, fact-seeking, querying and reasoning.

Finding Regions of Interest in Whole Slide Images Using Multiple Instance Learning

no code yet • 1 Apr 2024

Whole Slide Images (WSI), obtained by high-resolution digital scanning of microscope slides at multiple scales, are the cornerstone of modern Digital Pathology.

MonoBox: Tightness-free Box-supervised Polyp Segmentation using Monotonicity Constraint

no code yet • 1 Apr 2024

We propose MonoBox, an innovative box-supervised segmentation method constrained by monotonicity, which frees training from the user-unfriendly box-tightness assumption.

Benchmarking Image Transformers for Prostate Cancer Detection from Ultrasound Data

no code yet • 27 Mar 2024

In this work, we present a detailed study of several image transformer architectures for both ROI-scale and multi-scale classification, and a comparison of the performance of CNNs and transformers for ultrasound-based prostate cancer classification.

Integrative Graph-Transformer Framework for Histopathology Whole Slide Image Representation and Classification

no code yet • 26 Mar 2024

In digital pathology, the multiple instance learning (MIL) strategy is widely used in the weakly supervised histopathology whole slide image (WSI) classification task where giga-pixel WSIs are only labeled at the slide level.
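
In this slide-level setting, each gigapixel WSI is typically tiled into patches that form the bag, and only the slide label supervises training. The sketch below is a generic attention-pooling aggregator for such bags, not the graph-transformer framework of this paper; precomputed patch embeddings and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Generic attention pooling over patch embeddings: each patch receives a
    learned weight, and the slide (bag) representation is their weighted sum.
    Illustrative only; dimensions are assumptions, not taken from the paper."""

    def __init__(self, feat_dim: int = 512, attn_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (num_patches, feat_dim) -- one bag per whole slide image
        attn_logits = self.attention(patch_feats)             # (num_patches, 1)
        attn_weights = torch.softmax(attn_logits, dim=0)      # weights sum to 1 over patches
        slide_feat = (attn_weights * patch_feats).sum(dim=0)  # (feat_dim,)
        return self.classifier(slide_feat)                    # slide-level logits

# Usage: 1,000 patches with 512-dim features, supervised only by the slide label.
pooling = AttentionMILPooling()
slide_logits = pooling(torch.randn(1000, 512))
```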

Integrating multiscale topology in digital pathology with pyramidal graph convolutional networks

no code yet • 22 Mar 2024

The architecture's unique configuration allows for the concurrent modeling of structural patterns at lower magnifications and detailed cellular features at higher ones, while also quantifying the contribution of each magnification level to the prediction.

Towards Efficient Information Fusion: Concentric Dual Fusion Attention Based Multiple Instance Learning for Whole Slide Images

no code yet • 21 Mar 2024

In the realm of digital pathology, multi-magnification Multiple Instance Learning (multi-mag MIL) has proven effective in leveraging the hierarchical structure of Whole Slide Images (WSIs) to reduce information loss and redundant data.

Prompt-Guided Adaptive Model Transformation for Whole Slide Image Classification

no code yet • 19 Mar 2024

To address this issue, we propose PAMT, a novel Prompt-guided Adaptive Model Transformation framework that enhances MIL classification performance by seamlessly adapting pre-trained models to the specific characteristics of histopathology data.

Siamese Learning with Joint Alignment and Regression for Weakly-Supervised Video Paragraph Grounding

no code yet • 18 Mar 2024

Unlike previous weakly supervised grounding frameworks that rely on multiple instance learning or reconstruction learning for two-stage candidate ranking, we propose a novel Siamese learning framework that jointly learns cross-modal feature alignment and temporal coordinate regression without timestamp labels, achieving concise one-stage localization for WSVPG.