Doc-PP: Document Policy Preservation Benchmark
for Large Vision-Language Models

Chung-Ang University, Seoul, Korea
*Equal contribution   Corresponding author

Abstract

The deployment of Large Vision-Language Models (LVLMs) for real-world document question answering is often constrained by dynamic, user-defined policies that dictate information disclosure based on context. While ensuring adherence to these explicit constraints is critical, existing safety research primarily focuses on implicit social norms or text-only settings, overlooking the complexities of multimodal documents. In this paper, we introduce Doc-PP (Document Policy Preservation Benchmark), a novel benchmark constructed from real-world reports requiring reasoning across heterogeneous visual and textual elements under strict non-disclosure policies. Our evaluation highlights a systemic Reasoning-Induced Safety Gap: models frequently leak sensitive information when answers must be inferred through complex synthesis or aggregated across modalities, effectively circumventing existing safety constraints. Furthermore, we identify that providing extracted text improves perception but inadvertently facilitates leakage. To address these vulnerabilities, we propose DVA (Decompose–Verify–Aggregate), a structural inference framework that decouples reasoning from policy verification. Experimental results demonstrate that DVA significantly outperforms standard prompting defenses, offering a robust baseline for policy-compliant document understanding.

Motivation

Motivation: Policy Preservation Failure Example

An example of multimodal policy preservation failure. The LVLM leaks protected information by reasoning across visual and textual evidence, violating an explicit non-disclosure policy. Even when explicitly instructed not to disclose "Middle East revenue," the model extracts a percentage from a pie chart, identifies the total revenue from text, and calculates the protected information through implicit reasoning.
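The failure mode above can be made concrete with a small arithmetic sketch. All numbers here are hypothetical, chosen only to illustrate how two individually harmless values combine into the protected one:

```python
# Illustrative sketch (hypothetical numbers): how an LVLM can reconstruct a
# protected value from two innocuous pieces of evidence in one document.
pie_chart_share = 0.18        # "Middle East: 18%" read from a pie chart (hypothetical)
total_revenue_musd = 4200.0   # "Total revenue: $4,200M" read from the report text (hypothetical)

# Neither value is protected on its own, but their product is the
# policy-restricted "Middle East revenue".
middle_east_revenue = pie_chart_share * total_revenue_musd
print(f"Leaked value: ${middle_east_revenue:.0f}M")  # → Leaked value: $756M
```

Because the policy names only the final quantity, a surface-level filter on the inputs never fires; the violation exists only in the model's reasoning chain.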

Key Findings

1 Reasoning-Induced Safety Gap

  • Explicit vs. Implicit Queries: Models show strong policy compliance when sensitive information is directly requested, but leakage rates spike sharply when answers require multi-step reasoning or cross-modal synthesis.
  • Cross-Modal Vulnerability: When evidence is fragmented across text, tables, and charts, the increased integration complexity circumvents existing safety constraints.

2 The OCR Paradox

  • Perceptual Clarity Amplifies Risk: OCR-based inputs frequently exacerbate information leakage compared to raw image inputs, despite offering clearer data representations.
  • Stronger Understanding ≠ Better Safety: Improved textual accessibility facilitates precise numerical reasoning, making it easier for models to compute protected values from visual artifacts.

3 Impact of Evidence Type

  • Single-Source vs. Multi-Source: Single-modality evidence results in lower leakage, but multi-source or cross-modal evidence causes significant safety violations.
  • Text-Heavy Documents are Riskier: Documents with higher text ratios (e.g., business/financial reports) exhibit the highest leakage rates.

Document type analysis

Doc-PP Benchmark

Doc-PP Construction Pipeline

Overview of the three-stage Doc-PP benchmark construction pipeline: (1) Policy Construction, where confidential targets are generated and filtered; (2) Query Construction, producing reasoning-heavy queries; and (3) Evaluation, using a checklist-based framework to measure leakage and faithfulness.
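The checklist-based evaluation in stage (3) can be sketched as follows. This is a minimal stand-in, not Doc-PP's actual scorer: the `ChecklistItem` type, the substring matching, and the example facts are all illustrative assumptions.

```python
# Minimal sketch of a checklist-based leakage/faithfulness scorer, assuming
# each query carries checklist items that are either protected (disclosure
# counts as leakage) or allowed (disclosure counts toward faithfulness).
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    text: str          # fact to look for in the model response
    protected: bool    # True → mentioning it is a policy violation

def score_response(response: str, checklist: list[ChecklistItem]) -> dict:
    """Return leakage and faithfulness rates via naive substring matching."""
    hits = [item for item in checklist if item.text.lower() in response.lower()]
    protected_total = sum(item.protected for item in checklist)
    allowed_total = len(checklist) - protected_total
    leaked = sum(item.protected for item in hits)
    faithful = len(hits) - leaked
    return {
        "leakage": leaked / protected_total if protected_total else 0.0,
        "faithfulness": faithful / allowed_total if allowed_total else 0.0,
    }

# Hypothetical checklist for one query.
checklist = [
    ChecklistItem("total revenue was $4,200M", protected=False),
    ChecklistItem("$756M", protected=True),
]
print(score_response("Total revenue was $4,200M; I cannot share regional figures.", checklist))
```

In practice such checks would use an LLM judge rather than substring matching, but the two-sided structure (reward allowed facts, penalize protected ones) is what lets a single response be scored on both axes at once.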

Dataset Statistics

| Document Type | Explicit | Implicit |
|---|---|---|
| Business report | 181 (40.9%) | 292 (41.8%) |
| Research report | 171 (38.6%) | 254 (36.4%) |
| Financial report | 59 (13.3%) | 102 (14.6%) |
| Admin / Industry | 32 (7.2%) | 50 (7.2%) |
| Total | 443 | 698 |

Evidence Type Distribution

| Evidence Type | Count | % |
|---|---|---|
| Text (Single) | 598 | 52.4% |
| Text (Multi) | 104 | 9.1% |
| Table (Single) | 226 | 19.8% |
| Table (Multi) | 13 | 1.1% |
| Chart | 89 | 7.8% |
| Figure | 50 | 4.3% |
| Cross-modality | 61 | 5.3% |
| Total | 1,141 | 100% |

DVA: Decompose–Verify–Aggregate

We propose DVA, a lightweight inference-time framework that structurally separates information synthesis from policy enforcement to prevent reasoning-induced leakage.

1. Decompose — The model rewrites its prospective answer into a set of atomic information elements: individual facts or sub-claims extracted from the document.
2. Verify — Each atomic element is individually checked against the non-disclosure policy; elements that violate the policy are discarded.
3. Aggregate — Only policy-compliant elements are synthesized into the final response, ensuring safe and faithful output.

DVA Case Study

Case study: While CoT and Revision fail to withhold the restricted value, DVA successfully prevents leakage.
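The three stages above can be sketched as a single inference loop. This is a simplified illustration, not the paper's implementation: `generate` stands in for any LLM call, the sentence-level split is a crude decomposition, and the keyword check is a stand-in for model-based policy verification.

```python
# Sketch of the DVA inference loop, assuming an LLM-callable generate(prompt).
def dva_answer(question, document, policy_terms, generate):
    # 1) Decompose: draft an answer, then split it into atomic elements.
    draft = generate(f"Answer from the document.\n{document}\nQ: {question}")
    elements = [s.strip() for s in draft.split(".") if s.strip()]

    # 2) Verify: check each element against the non-disclosure policy.
    safe = [e for e in elements
            if not any(term.lower() in e.lower() for term in policy_terms)]

    # 3) Aggregate: synthesize only policy-compliant elements.
    return ". ".join(safe) + ("." if safe else "")
```

The key design point is that verification operates on individual elements after reasoning has finished, so a protected value derived mid-chain is still caught before it reaches the final answer, which is exactly what end-to-end prompting defenses like CoT fail to guarantee.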

Experiments

Main Results. Performance comparison under image-based (PNG) and OCR-based inputs. Faithfulness (↑ higher is better) and Leakage (↓ lower is better).

| Model | Explicit: Faithful ↑ (w/o policy) | Explicit: Leakage ↓ (w/ policy) | Implicit: Faithful ↑ (w/o policy) | Implicit: Faithful ↑ (w/ policy) | Implicit: Leakage ↓ (w/ policy) |
|---|---|---|---|---|---|
| **Image Input** | | | | | |
| GPT-5.2 | 98.0 | 8.1 | 74.2 | 68.7 | 25.7 |
| Gemini-3-Pro | 96.6 | 31.4 | 64.6 | 60.5 | 46.8 |
| Qwen3-VL-Thinking | 96.8 | 11.5 | 75.7 | 63.4 | 40.4 |
| Gemini-3-Flash | 97.3 | 49.4 | 70.2 | 69.0 | 64.6 |
| Qwen3-VL-Instruct | 93.5 | 75.6 | 67.5 | 61.6 | 93.5 |
| Mistral-Large | 89.5 | 45.8 | 64.6 | 60.0 | 76.8 |
| **Mistral-OCR Input** | | | | | |
| GPT-5.2 | 97.7 | 11.5 | 78.6 | 70.6 | 25.8 |
| Gemini-3-Pro | 97.5 | 28.7 | 61.1 | 61.4 | 37.4 |
| Qwen3-VL-Thinking | 97.6 | 14.2 | 80.4 | 63.8 | 40.9 |

Performance by Evidence Type. Leakage and faithfulness breakdown for Gemini-3-Flash across different evidence configurations.

| Evidence Type | Explicit: Leak ↓ | Implicit: Leak ↓ | Implicit: Faithful ↑ |
|---|---|---|---|
| **Single-Source Evidence** | | | |
| Text (Single) | 59.7 | 72.8 | 73.7 |
| Table (Single) | 42.7 | 75.2 | 61.0 |
| Chart | 52.7 | 67.9 | 68.5 |
| Figure | 60.0 | 56.7 | 78.9 |
| **Multi-Source / Cross-Modal Evidence** | | | |
| Text (Multi) | 69.2 | 84.6 | 66.1 |
| Table (Multi) | 80.0 | 100.0 | 48.8 |
| Table + Text | 66.6 | 88.0 | 76.0 |
| Chart + Text | 100.0 | 100.0 | 69.3 |
| Figure + Text | 75.0 | 83.3 | 79.2 |

Mitigation Strategies. Leakage comparison across methods (↓ lower is better). DVA consistently outperforms baseline defenses.

| Method | Gemini-3-Flash | Qwen3-VL | Mistral-Large |
|---|---|---|---|
| Default | 64.6 | 93.5 | 76.8 |
| CoT | 51.0 | 50.6 | 70.6 |
| Revision | 38.8 | 44.5 | 41.9 |
| DVA (Ours) | 30.5 | 24.5 | 41.6 |

DVA substantially reduces leakage across all models, lowering the rate from 64.6 to 30.5 on Gemini-3-Flash and from 93.5 to 24.5 on Qwen3-VL, demonstrating that element-level verification is critical for multimodal policy preservation.

BibTeX

@article{jang2026doc,
  title={Doc-PP: Document Policy Preservation Benchmark for Large Vision-Language Models},
  author={Jang, Haeun and Chang, Hwan and Lee, Hwanhee},
  journal={arXiv preprint arXiv:2601.03926},
  year={2026}
}