ChainMPQ: Interleaved Text-Image Reasoning Chains for Mitigating Relation Hallucinations

The University of Queensland
ICLR 2026

Framework Overview

Figure: Overview of the ChainMPQ framework.

Abstract

While Large Vision-Language Models (LVLMs) achieve strong performance on multimodal tasks, hallucinations continue to undermine their reliability. Among the three categories of hallucination (object, attribute, and relation), relation hallucinations account for the largest proportion yet have received the least attention. To address this challenge, we propose ChainMPQ (Multi-Perspective Questions guided Interleaved Text-image Reasoning Chain), a training-free method that improves relational inference in LVLMs by utilizing accumulated textual and visual memories. ChainMPQ first extracts subject and object keywords from the question and enhances the corresponding image regions. It then constructs multi-perspective questions that focus on the three core components of a relationship: the subject, the object, and the relation linking them. These questions are fed to the model sequentially, with textual and visual memories from earlier steps providing supporting context for later ones, thereby forming an interleaved chain of images and text that guides progressive relational reasoning. Experiments on multiple LVLMs and benchmarks show that ChainMPQ substantially reduces relation hallucinations, and ablation studies further validate the effectiveness of its three core modules.
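The pipeline described in the abstract can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: the helper names (`build_multi_perspective_questions`, `chain_mpq`, `answer_fn`, `highlight_fn`) are hypothetical stand-ins, and a real system would call an LVLM and perform actual image-region enhancement.

```python
def build_multi_perspective_questions(subject, obj, relation_query):
    """Decompose a relation question into sub-questions targeting the three
    core components of a relationship: subject, object, and the relation itself.
    The phrasing here is illustrative, not taken from the paper."""
    return [
        f"What is the {subject} doing, and where is it in the image?",
        f"What is the {obj} doing, and where is it in the image?",
        relation_query,  # the relation is asked last, with accumulated context
    ]


def chain_mpq(relation_query, subject, obj, answer_fn, image="image",
              highlight_fn=None):
    """Run an interleaved text-image reasoning chain (sketch).

    answer_fn(question, context, image) -> answer string; stands in for an
        LVLM inference call.
    highlight_fn(image, keywords) -> image with the subject/object regions
        enhanced; stands in for the region-enhancement step.
    Returns (final_answer, memory), where memory is the interleaved chain of
    (image, question, answer) records accumulated across steps.
    """
    if highlight_fn is not None:
        image = highlight_fn(image, [subject, obj])

    memory = []
    for q in build_multi_perspective_questions(subject, obj, relation_query):
        # Earlier Q/A pairs (textual memory) plus the enhanced image
        # (visual memory) provide context for each subsequent question.
        context = " ".join(f"Q: {m['q']} A: {m['a']}" for m in memory)
        a = answer_fn(q, context, image)
        memory.append({"image": image, "q": q, "a": a})

    return memory[-1]["a"], memory
```

In this sketch the final relation question is answered only after the model has committed to descriptions of the subject and object, which is the mechanism the abstract credits for reducing relation hallucinations.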

BibTeX

@misc{wu2025chainmpqinterleavedtextimagereasoning,
      title={ChainMPQ: Interleaved Text-Image Reasoning Chains for Mitigating Relation Hallucinations}, 
      author={Yike Wu and Yiwei Wang and Yujun Cai},
      year={2025},
      eprint={2510.06292},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.06292}, 
}