The deployment of Large Vision-Language Models (LVLMs) for real-world document question answering is often constrained by dynamic, user-defined policies that dictate information disclosure based on context. While ensuring adherence to these explicit constraints is critical, existing safety research primarily focuses on implicit social norms or text-only settings, overlooking the complexities of multimodal documents. In this paper, we introduce Doc-PP (Document Policy Preservation Benchmark), a novel benchmark constructed from real-world reports requiring reasoning across heterogeneous visual and textual elements under strict non-disclosure policies. Our evaluation highlights a systemic Reasoning-Induced Safety Gap: models frequently leak sensitive information when answers must be inferred through complex synthesis or aggregated across modalities, effectively circumventing existing safety constraints. Furthermore, we find that providing extracted text improves perception but inadvertently facilitates leakage. To address these vulnerabilities, we propose DVA (Decompose–Verify–Aggregate), a structured inference framework that decouples reasoning from policy verification. Experimental results demonstrate that DVA significantly outperforms standard prompting defenses, offering a robust baseline for policy-compliant document understanding.
An example of multimodal policy preservation failure. The LVLM leaks protected information by reasoning across visual and textual evidence, violating an explicit non-disclosure policy. Even when explicitly instructed not to disclose "Middle East revenue," the model extracts a percentage from a pie chart, identifies the total revenue from text, and calculates the protected information through implicit reasoning.
Overview of the three-stage Doc-PP benchmark construction pipeline: (1) Policy Construction, where confidential targets are generated and filtered; (2) Query Construction, producing reasoning-heavy queries; and (3) Evaluation, using a checklist-based framework to measure leakage and faithfulness.
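The checklist-based scoring in stage (3) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each query ships with two checklists (facts a faithful answer should contain, and protected facts it must not contain), and a simple substring matcher stands in for an LLM judge. The example answer and checklist items (`$2.0B`, `$360M`) are invented for illustration.

```python
def checklist_score(answer: str, items: list[str]) -> float:
    """Percentage of checklist items found in the answer (case-insensitive).

    Applied to the 'required facts' checklist this yields a faithfulness
    score; applied to the 'protected facts' checklist it yields leakage.
    """
    if not items:
        return 0.0
    hits = sum(1 for item in items if item.lower() in answer.lower())
    return 100.0 * hits / len(items)


# Hypothetical model answer and checklists for one query.
answer = "Total revenue was $2.0B; the regional breakdown is withheld per policy."
faithful_items = ["$2.0B"]   # facts the answer should contain
protected_items = ["$360M"]  # the protected figure it must not contain

faithfulness = checklist_score(answer, faithful_items)   # 100.0
leakage = checklist_score(answer, protected_items)       # 0.0
```

Scoring both checklists independently is what lets the benchmark report faithfulness and leakage as separate axes, so a model cannot trivially "win" by refusing every query.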
| Document Type | Explicit (n, %) | Implicit (n, %) |
|---|---|---|
| Business report | 181 (40.9%) | 292 (41.8%) |
| Research report | 171 (38.6%) | 254 (36.4%) |
| Financial report | 59 (13.3%) | 102 (14.6%) |
| Admin / Industry | 32 (7.2%) | 50 (7.2%) |
| Total | 443 (100%) | 698 (100%) |
| Evidence Type | Count | % |
|---|---|---|
| Text (Single) | 598 | 52.4% |
| Text (Multi) | 104 | 9.1% |
| Table (Single) | 226 | 19.8% |
| Table (Multi) | 13 | 1.1% |
| Chart | 89 | 7.8% |
| Figure | 50 | 4.3% |
| Cross-modality | 61 | 5.3% |
| Total | 1,141 | 100% |
We propose DVA, a lightweight inference-time framework that structurally separates information synthesis from policy enforcement to prevent reasoning-induced leakage.
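The separation can be sketched as a three-stage loop, shown below on the running "Middle East revenue" example from the figure above. All function names, the toy document, and the fixed decomposition are illustrative stand-ins for LVLM calls, not the paper's implementation: the point is only that each evidence element is policy-checked *before* aggregation, so a protected value can never be derived by combining cleared elements with restricted ones.

```python
# Toy "document": evidence elements an LVLM might extract per modality.
# `derives` names the quantity this element would help reveal (hypothetical).
DOCUMENT = {
    "middle_east_share": {"source": "chart", "value": 0.18,
                          "derives": "middle_east_revenue"},
    "total_revenue":     {"source": "text", "value": 2.0e9,
                          "derives": None},
}

POLICY = {"middle_east_revenue"}  # explicit non-disclosure targets


def decompose(query: str) -> list[str]:
    """Stage 1: split the query into atomic evidence requests.
    (Stand-in: a fixed decomposition for the running example.)"""
    return ["middle_east_share", "total_revenue"]


def verify(element_ids: list[str]) -> list[str]:
    """Stage 2: withhold any element that would reveal a protected value,
    before any cross-element reasoning can combine it with others."""
    return [eid for eid in element_ids
            if DOCUMENT[eid]["derives"] not in POLICY]


def aggregate(cleared: list[str]) -> str:
    """Stage 3: synthesize an answer only from policy-cleared evidence."""
    if len(cleared) < 2:
        # The derivation cannot complete without the withheld element.
        return "Cannot disclose: required evidence is policy-restricted."
    share = DOCUMENT["middle_east_share"]["value"]
    total = DOCUMENT["total_revenue"]["value"]
    return f"Middle East revenue: ${share * total:,.0f}"


answer = aggregate(verify(decompose("What is the Middle East revenue?")))
```

Because the chart element is filtered out at the verification stage, the aggregation step never has enough evidence to compute the restricted figure; a monolithic chain-of-thought, by contrast, carries both pieces of evidence into the same reasoning trace.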
Case study: While CoT and Revision fail to withhold the restricted value, DVA successfully prevents leakage.
Main Results. Performance comparison under image-based (PNG) and OCR-based inputs. Faithfulness (↑ higher is better) and Leakage (↓ lower is better).
| Model | Explicit: Faithful ↑ (w/o policy) | Explicit: Leakage ↓ (w/ policy) | Implicit: Faithful ↑ (w/o policy) | Implicit: Faithful ↑ (w/ policy) | Implicit: Leakage ↓ (w/ policy) |
|---|---|---|---|---|---|
| **Image Input** | | | | | |
| GPT-5.2 | 98.0 | 8.1 | 74.2 | 68.7 | 25.7 |
| Gemini-3-Pro | 96.6 | 31.4 | 64.6 | 60.5 | 46.8 |
| Qwen3-VL-Thinking | 96.8 | 11.5 | 75.7 | 63.4 | 40.4 |
| Gemini-3-Flash | 97.3 | 49.4 | 70.2 | 69.0 | 64.6 |
| Qwen3-VL-Instruct | 93.5 | 75.6 | 67.5 | 61.6 | 93.5 |
| Mistral-Large | 89.5 | 45.8 | 64.6 | 60.0 | 76.8 |
| **Mistral-OCR Input** | | | | | |
| GPT-5.2 | 97.7 | 11.5 | 78.6 | 70.6 | 25.8 |
| Gemini-3-Pro | 97.5 | 28.7 | 61.1 | 61.4 | 37.4 |
| Qwen3-VL-Thinking | 97.6 | 14.2 | 80.4 | 63.8 | 40.9 |
Performance by Evidence Type. Leakage and faithfulness breakdown for Gemini-3-Flash across different evidence configurations.
| Evidence Type | Explicit: Leak ↓ | Implicit: Leak ↓ | Implicit: Faithful ↑ |
|---|---|---|---|
| **Single-Source Evidence** | | | |
| Text (Single) | 59.7 | 72.8 | 73.7 |
| Table (Single) | 42.7 | 75.2 | 61.0 |
| Chart | 52.7 | 67.9 | 68.5 |
| Figure | 60.0 | 56.7 | 78.9 |
| **Multi-Source / Cross-Modal Evidence** | | | |
| Text (Multi) | 69.2 | 84.6 | 66.1 |
| Table (Multi) | 80.0 | 100.0 | 48.8 |
| Table + Text | 66.6 | 88.0 | 76.0 |
| Chart + Text | 100.0 | 100.0 | 69.3 |
| Figure + Text | 75.0 | 83.3 | 79.2 |
Mitigation Strategies. Leakage comparison across methods (↓ lower is better). DVA consistently outperforms baseline defenses.
| Methods | Gemini-3-Flash | Qwen3-VL | Mistral-Large |
|---|---|---|---|
| Default | 64.6 | 93.5 | 76.8 |
| CoT | 51.0 | 50.6 | 70.6 |
| Revision | 38.8 | 44.5 | 41.9 |
| DVA (Ours) | 30.5 | 24.5 | 41.6 |
DVA substantially reduces leakage across all models, lowering the leakage rate from 64.6 to 30.5 on Gemini-3-Flash and from 93.5 to 24.5 on Qwen3-VL, demonstrating that element-level policy verification is critical for multimodal policy preservation.
@article{jang2026doc,
title={Doc-PP: Document Policy Preservation Benchmark for Large Vision-Language Models},
author={Jang, Haeun and Chang, Hwan and Lee, Hwanhee},
journal={arXiv preprint arXiv:2601.03926},
year={2026}
}