OfficeQA Pro: An Enterprise Benchmark for End-to-End Grounded Reasoning
Abstract
OfficeQA Pro evaluates AI agents on multi-document reasoning across historical financial documents, revealing persistent challenges in grounded reasoning despite advanced model capabilities.
We introduce OfficeQA Pro, a benchmark for evaluating AI agents on grounded, multi-document reasoning over a large, heterogeneous document corpus. The corpus consists of U.S. Treasury Bulletins spanning nearly 100 years, comprising 89,000 pages and over 26 million numerical values. OfficeQA Pro contains 133 questions that require precise document parsing, retrieval, and analytical reasoning across both unstructured text and tabular data. Frontier LLMs including Claude Opus 4.6, GPT-5.4, and Gemini 3.1 Pro Preview achieve less than 5% accuracy on OfficeQA Pro when relying on parametric knowledge alone, and less than 12% when additionally given access to the web. When provided directly with the document corpus, frontier agents still struggle on over half of the questions, scoring 34.1% on average. We find that providing agents with a structured document representation produced by Databricks' ai_parse_document yields an average relative performance gain of 16.1% across agents. We conduct additional ablations to study the effects of model selection, table representation, retrieval strategy, and test-time scaling on performance. Despite these improvements, significant headroom remains before agents can be considered reliable at enterprise-grade grounded reasoning.
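To make the structured-representation setup concrete, the sketch below shows how a document corpus could be parsed with ai_parse_document on Databricks. This is a minimal illustration, not the paper's actual pipeline: the volume path, table names, and output handling are assumptions, and the function's exact output schema may differ across releases.

```python
# Minimal sketch (PySpark on Databricks); paths and table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# read_files with format => 'binaryFile' exposes each file as (path, content, ...);
# ai_parse_document converts the raw document bytes into a structured representation.
parsed = spark.sql("""
    SELECT
        path,
        ai_parse_document(content) AS parsed
    FROM read_files(
        '/Volumes/main/officeqa/treasury_bulletins/',
        format => 'binaryFile'
    )
""")

# Persist the structured output so agents retrieve over parsed pages and tables
# rather than raw PDF bytes.
parsed.write.mode("overwrite").saveAsTable("main.officeqa.bulletins_parsed")
```

This contrast is what the abstract's ablation measures: agents reasoning over a parsed, table-aware representation rather than the raw documents account for the reported 16.1% average relative gain.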
Community
The following papers, similar to this one, were recommended by the Semantic Scholar API:
- $\tau$-Knowledge: Evaluating Conversational Agents over Unstructured Knowledge (2026)
- SPD-RAG: Sub-Agent Per Document Retrieval-Augmented Generation (2026)
- BRIDGE: Benchmark for multi-hop Reasoning In long multimodal Documents with Grounded Evidence (2026)
- CorpusQA: A 10 Million Token Benchmark for Corpus-Level Analysis and Reasoning (2026)
- SAGE: Benchmarking and Improving Retrieval for Deep Research Agents (2026)
- Proof of Time: A Benchmark for Evaluating Scientific Idea Judgments (2026)
- Structured Context Engineering for File-Native Agentic Systems: Evaluating Schema Accuracy, Format Effectiveness, and Multi-File Navigation at Scale (2026)