Multimodal RewardBench 2: Evaluating Omni Reward Models for Interleaved Text and Image Paper • 2512.16899 • Published Dec 2025 • 12
LYNX: Learning Dynamic Exits for Confidence-Controlled Reasoning Paper • 2512.05325 • Published Dec 2025 • 2
When Do Transformers Learn Heuristics for Graph Connectivity? Paper • 2510.19753 • Published Oct 22 • 3
Textual Steering Vectors Can Improve Visual Understanding in Multimodal Large Language Models Paper • 2505.14071 • Published May 20 • 1
Zebra-CoT: A Dataset for Interleaved Vision Language Reasoning Paper • 2507.16746 • Published Jul 22 • 35
AION-1: Omnimodal Foundation Model for Astronomical Sciences Paper • 2510.17960 • Published Oct 20 • 29
Tree-based Dialogue Reinforced Policy Optimization for Red-Teaming Attacks Paper • 2510.02286 • Published Oct 2 • 28
Sample Efficient Preference Alignment in LLMs via Active Exploration Paper • 2312.00267 • Published Dec 1, 2023
BLIP3-o: A Family of Fully Open Unified Multimodal Models-Architecture, Training and Dataset Paper • 2505.09568 • Published May 14 • 98
LLM360 K2: Building a 65B 360-Open-Source Large Language Model from Scratch Paper • 2501.07124 • Published Jan 13