
Attend What Matters: Leveraging Vision Foundational Models for Breast Cancer Classification Using Mammograms

Samyak Sanghvi, Piyush Miglani, Sarvesh Shashikumar, Kaustubh R Borgavi, Chetan Arora
IEEE ISBI 2026
Code

Abstract

Vision Transformers (ViTs) have become the architecture of choice for many computer vision tasks, yet their performance in computer-aided diagnostics remains limited. Focusing on breast cancer detection from mammograms (BCDM), we identify two main causes for this shortfall. First, medical images are high-resolution with small abnormalities, leading to an excessive number of tokens and making it difficult for softmax-based attention to localize and attend to relevant regions. Second, medical image classification is inherently fine-grained, with low inter-class and high intra-class variability, where standard cross-entropy training is insufficient. To overcome these challenges, we propose a framework with three key components: (1) region-of-interest (RoI) based token reduction using an object detection model to guide attention; (2) contrastive learning between selected RoIs to enhance fine-grained discrimination through hard-negative-based training; and (3) a DINOv2-pretrained ViT that captures localization-aware, fine-grained features instead of global CLIP representations. Experiments on public mammography datasets demonstrate that our method achieves superior performance over existing baselines, establishing its effectiveness and potential clinical utility for large-scale breast cancer screening.
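As a rough illustration of component (1), RoI-guided token reduction can be sketched as keeping only the ViT patch tokens that fall inside detector-proposed boxes. The grid size, box format, and function names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def roi_token_indices(boxes, img_size=1024, patch=16):
    """Map pixel-space RoI boxes (x1, y1, x2, y2) to the flat indices of the
    ViT patch tokens they overlap, on a (img_size // patch)^2 token grid."""
    g = img_size // patch
    keep = set()
    for x1, y1, x2, y2 in boxes:
        c1, c2 = int(x1) // patch, min((int(x2) - 1) // patch, g - 1)
        r1, r2 = int(y1) // patch, min((int(y2) - 1) // patch, g - 1)
        for r in range(r1, r2 + 1):
            for c in range(c1, c2 + 1):
                keep.add(r * g + c)
    return sorted(keep)

def reduce_tokens(tokens, boxes, img_size=1024, patch=16):
    """Drop every patch token that lies outside all RoIs; attention then
    operates on the reduced token set only."""
    idx = roi_token_indices(boxes, img_size, patch)
    return tokens[idx], idx
```

With a 1024×1024 input and 16×16 patches this prunes a 4096-token sequence down to the handful of tokens covering the detected abnormalities, which is the mechanism the abstract describes for steering attention toward relevant regions.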


Figure 1: Overview of the Attend What Matters framework.
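Component (2), the contrastive objective between selected RoIs, can be sketched as an InfoNCE-style loss whose negatives are mined as the opposite-class RoI embeddings most similar to the anchor. The function names, temperature, and mining rule are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def _norm(v):
    """L2-normalize along the last axis."""
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-8)

def mine_hard_negatives(anchor, candidates, k=2):
    """Assumed mining rule: keep the k opposite-class RoI embeddings with
    the highest cosine similarity to the anchor."""
    sims = _norm(candidates) @ _norm(anchor)
    return candidates[np.argsort(-sims)[:k]]

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: pull the anchor toward its positive RoI and
    push it away from the mined hard negatives."""
    a, p, n = _norm(anchor), _norm(positive), _norm(negatives)
    logits = np.concatenate(([a @ p], n @ a)) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

Mining the most anchor-similar opposite-class RoIs is what makes the negatives "hard": the loss concentrates gradient on exactly the low inter-class-variability pairs that plain cross-entropy struggles to separate.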

Results

| Method | AUC | F1 | R@0.1 | R@0.3 | R@0.5 |
|---|---|---|---|---|---|
| *Vision only* | | | | | |
| ViT-A | 79.0 | 41.1 | 55.0 | 71.2 | 84.3 |
| ViT-B | 83.0 | 50.0 | 61.4 | 77.0 | 86.9 |
| ViT-C | 78.4 | 31.1 | 43.7 | 67.2 | 82.4 |
| MedVAE | 57.5 | 20.6 | 23.7 | 41.9 | 60.1 |
| TReg-SwinT | 85.8 | 53.0 | 55.1 | 80.6 | 90.2 |
| XFMamba | 63.6 | 18.3 | 25.2 | 51.5 | 64.6 |
| *Image-Text* | | | | | |
| MMBCD | 77.1 | 27 | 50 | 66.2 | 82.8 |
| M-C-B5 | 85.8 | 50.8 | 65.4 | **83.5** | 89.9 |
| Ours | **86.6** | **54.5** | **66.5** | 80.7 | **90.3** |

Table 1: Performance comparison on the VinDr dataset (best value per column in bold). ViT-A and ViT-B correspond to DINO backbones with a linear head at input resolutions of 448×448 and 1024×1024, respectively. ViT-C uses a DeiT head instead of a linear head at 448×448 input resolution.

Citation

@inproceedings{sanghvi2026attend,
  title={Attend What Matters: Leveraging Vision Foundational Models for Breast Cancer Classification Using Mammograms},
  author={Sanghvi, Samyak and Miglani, Piyush and Shashikumar, Sarvesh and Borgavi, Kaustubh R and Arora, Chetan},
  booktitle={IEEE International Symposium on Biomedical Imaging (ISBI)},
  year={2026}
}