AcademiClaw: When Students Set Challenges for AI Agents
Abstract
AcademiClaw is a comprehensive benchmark for evaluating AI agents on complex academic tasks spanning multiple domains, revealing significant capability gaps in current models.
Benchmarks within the OpenClaw ecosystem have thus far evaluated only assistant-level tasks, leaving the academic-level capabilities of OpenClaw largely unexamined. We introduce AcademiClaw, a bilingual benchmark of 80 complex, long-horizon tasks sourced directly from university students' real academic workflows (homework, research projects, competitions, and personal projects) that they found current AI agents unable to solve effectively. Curated from 230 student-submitted candidates through rigorous expert review, the final task set spans more than 25 professional domains, ranging from olympiad-level mathematics and linguistics problems to GPU-intensive reinforcement learning and full-stack system debugging, with 16 tasks requiring CUDA GPU execution. Each task executes in an isolated Docker sandbox and is scored on task completion by multi-dimensional rubrics combining six complementary techniques, with an independent five-category safety audit providing additional behavioral analysis. Experiments on six frontier models show that even the best achieves only a 55% pass rate. Further analysis uncovers sharp capability boundaries across task domains, divergent behavioral strategies among models, and a disconnect between token consumption and output quality, providing fine-grained diagnostic signals beyond what aggregate metrics reveal. We hope that AcademiClaw and its open-sourced data and code can serve as a useful resource for the OpenClaw community, driving progress toward agents that are more capable and versatile across the full breadth of real-world academic demands. All data and code are available at https://github.com/GAIR-NLP/AcademiClaw.
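As a purely illustrative sketch of the evaluation loop the abstract describes, the snippet below runs one task in an isolated Docker container and combines per-dimension rubric scores into a weighted total. All names, weights, and dimension labels here are assumptions for illustration; the actual harness and rubric definitions live in the linked repository.

    # Minimal sketch of the per-task evaluation loop, assuming the docker SDK.
    # Task fields, dimension names, and weights are hypothetical, not the
    # paper's actual rubric; see the AcademiClaw repository for the real harness.
    from dataclasses import dataclass
    import docker

    @dataclass
    class Task:
        task_id: str
        image: str       # task-specific image, one isolated sandbox per task
        command: str     # entry point that runs the agent on the task
        needs_gpu: bool  # 16 of the 80 tasks require a CUDA GPU

    # Hypothetical weights over six complementary scoring techniques.
    RUBRIC_WEIGHTS = {
        "code_execution": 0.25,
        "unit_tests": 0.20,
        "output_match": 0.20,
        "llm_judge": 0.15,
        "vision_check": 0.10,
        "browser_e2e": 0.10,
    }

    def run_task(task: Task) -> str:
        """Execute one task in its sandbox and return the container logs."""
        client = docker.from_env()
        kwargs = {"remove": True}
        if task.needs_gpu:
            kwargs["device_requests"] = [
                docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
            ]
        return client.containers.run(task.image, task.command, **kwargs).decode()

    def rubric_score(scores: dict[str, float]) -> float:
        """Combine per-dimension scores in [0, 1] into a weighted total."""
        return sum(RUBRIC_WEIGHTS[d] * s for d, s in scores.items())

A pass/fail verdict could then come from thresholding the weighted total, although the paper's exact aggregation may differ; the five-category safety audit would run as a separate, non-scoring pass.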
Community
that multi-layer evaluation stack in AcademiClaw is the most interesting bit for me, because it turns a single pass/fail into a landscape of diagnostics from code execution to browser end-to-end checks. i'd love to see an ablation where you drop the vision-based assessment and rely purely on runtime checks to see how much the diagnostic signal rides on perception versus reasoning. the gap between token consumption and output quality is exactly the kind of nonlinearity i worry about when optimizing for cost, and i wonder if reweighting the rubric toward long-horizon steps would shift model rankings more than raw speed does. the bilingual setup is neat but also invites a closer look at language-specific failure modes, especially on tasks that hinge on precise mathematical or coding phrasing. arxivlens had a solid breakdown that helped me parse the method details, and for anyone skimming, it's worth a quick check: https://arxivlens.com/PaperView/Details/academiclaw-when-students-set-challenges-for-ai-agents-1939-df19763e
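to make that ablation concrete, here's roughly what i mean by dropping the vision dimension and renormalizing the remaining weights (dimension names and numbers are invented, not the paper's actual rubric):

    def ablate(weights, drop):
        # remove one scoring dimension and renormalize the rest to sum to 1
        kept = {d: w for d, w in weights.items() if d != drop}
        total = sum(kept.values())
        return {d: w / total for d, w in kept.items()}

    # invented weights for illustration only
    weights = {"code_execution": 0.25, "unit_tests": 0.20, "output_match": 0.20,
               "llm_judge": 0.15, "vision_check": 0.10, "browser_e2e": 0.10}

    print(ablate(weights, "vision_check"))

comparing pass rates under the original and ablated rubric would show how much of the diagnostic signal actually rides on perception.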
Get this paper in your agent:

    hf papers read 2605.02661

Don't have the latest CLI?

    curl -LsSf https://hf.co/cli/install.sh | bash