Multimodal RewardBench 2: Evaluating Omni Reward Models for Interleaved Text and Image Paper • 2512.16899 • Published 7 days ago • 12
BIRD: A Trustworthy Bayesian Inference Framework for Large Language Models Paper • 2404.12494 • Published Apr 18, 2024
Rethinking LLM Uncertainty: A Multi-Agent Approach to Estimating Black-Box Model Uncertainty Paper • 2412.09572 • Published Dec 12, 2024
BLIP3-o: A Family of Fully Open Unified Multimodal Models-Architecture, Training and Dataset Paper • 2505.09568 • Published May 14 • 98
ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding Paper • 2501.05452 • Published Jan 9 • 15
Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? Paper • 2406.07546 • Published Jun 11, 2024 • 9
MuirBench: A Comprehensive Benchmark for Robust Multi-image Understanding Paper • 2406.09411 • Published Jun 13, 2024 • 19
Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models Paper • 2406.09403 • Published Jun 13, 2024 • 23
BLINK: Multimodal Large Language Models Can See but Not Perceive Paper • 2404.12390 • Published Apr 18, 2024 • 26
ImagenHub: Standardizing the evaluation of conditional image generation models Paper • 2310.01596 • Published Oct 2, 2023 • 19
Visual Program Distillation: Distilling Tools and Programmatic Reasoning into Vision-Language Models Paper • 2312.03052 • Published Dec 5, 2023
TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering Paper • 2303.11897 • Published Mar 21, 2023
Training Language Models to Generate Text with Citations via Fine-grained Rewards Paper • 2402.04315 • Published Feb 6, 2024
One Embedder, Any Task: Instruction-Finetuned Text Embeddings Paper • 2212.09741 • Published Dec 19, 2022 • 4