Efficient Whole Slide Pathology VQA via Token Compression
In this project, we propose Token Compression Pathology LLaVA (TCP-LLaVA), the first MLLM architecture to perform whole-slide image (WSI) visual question answering (VQA) via token compression.
Abstract
Whole-slide images (WSIs) in pathology can reach up to 100,000 × 100,000 pixels, posing significant challenges for multimodal large language models (MLLMs) due to long context lengths and high computational demands. Previous methods typically focus on patch-level analysis or slide-level classification using CLIP-based models with multi-instance learning, but they lack the generative capabilities needed for visual question answering (VQA). More recent MLLM-based approaches address VQA by feeding thousands of patch tokens directly into the language model, which leads to excessive resource consumption. To address these limitations, we propose Token Compression Pathology LLaVA (TCP-LLaVA), the first MLLM architecture to perform WSI VQA via token compression. TCP-LLaVA introduces a set of trainable compression tokens that aggregate visual and textual information through a modality compression module, inspired by the [CLS] token mechanism in BERT.
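To illustrate the idea of trainable compression tokens, the sketch below shows one way such a modality compression module could look in PyTorch: a small set of learnable tokens cross-attends over the concatenated visual and text tokens, and only the compressed tokens are passed on to the language model. This is a minimal illustration, not the authors' implementation; the module name, token count, hidden size, and use of a single cross-attention layer are all assumptions.

```python
# Minimal sketch (assumptions throughout, not the TCP-LLaVA implementation):
# trainable compression tokens aggregate visual + textual information via
# cross-attention, in the spirit of BERT's [CLS] token.
import torch
import torch.nn as nn


class ModalityCompressionModule(nn.Module):
    def __init__(self, dim=4096, num_compression_tokens=32, num_heads=8):
        super().__init__()
        # Learnable compression tokens, analogous to [CLS] in BERT.
        self.compression_tokens = nn.Parameter(
            torch.randn(1, num_compression_tokens, dim) * 0.02
        )
        # Compression tokens act as queries over the visual + text context.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens, text_tokens):
        # visual_tokens: (B, N_v, dim) patch features from a WSI (N_v can be thousands)
        # text_tokens:   (B, N_t, dim) embedded question tokens
        context = torch.cat([visual_tokens, text_tokens], dim=1)
        queries = self.compression_tokens.expand(visual_tokens.size(0), -1, -1)
        compressed, _ = self.cross_attn(queries, context, context)
        # Only the compressed tokens (e.g., 32 instead of thousands) would be
        # fed to the language model, shrinking its visual context length.
        return self.norm(compressed + queries)


if __name__ == "__main__":
    # Example: compress 10,000 patch tokens plus 64 question tokens into 32 tokens.
    module = ModalityCompressionModule(dim=4096, num_compression_tokens=32)
    vis = torch.randn(1, 10000, 4096)
    txt = torch.randn(1, 64, 4096)
    out = module(vis, txt)
    print(out.shape)  # torch.Size([1, 32, 4096])
```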