Large Language Models
Reasoning Language Models: A Blueprint
·3562 words·17 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 ETH Zurich
Democratizing advanced reasoning in AI, this blueprint introduces a modular framework for building Reasoning Language Models (RLMs), simplifying development and enhancing accessibility.
Mobile-Agent-E: Self-Evolving Mobile Assistant for Complex Tasks
·2333 words·11 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 University of Illinois Urbana-Champaign
Mobile-Agent-E: A self-evolving mobile assistant conquering complex tasks with hierarchical agents and a novel self-evolution module, significantly outperforming prior approaches.
Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training
·4105 words·20 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Fudan University
Agent-R: A novel self-training framework enables language model agents to learn from errors by dynamically constructing training data that corrects erroneous actions, resulting in significantly improv…
PaSa: An LLM Agent for Comprehensive Academic Paper Search
·4507 words·22 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Peking University
PaSa: An LLM agent autonomously performs comprehensive academic paper searches, outperforming existing methods by efficiently combining search tools, paper reading, and citation analysis, optimized vi…
Evolving Deeper LLM Thinking
·7089 words·34 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Google DeepMind
Mind Evolution, a novel evolutionary search strategy, significantly boosts Large Language Model (LLM) problem-solving by generating, recombining, and refining candidate solutions via an LLM, outperfor…
ComplexFuncBench: Exploring Multi-Step and Constrained Function Calling under Long-Context Scenario
·3933 words·19 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Tsinghua University
ComplexFuncBench, a new benchmark, rigorously evaluates LLMs’ complex function-calling abilities across real-world scenarios involving multi-step processes, constraints, and long contexts.
Towards Large Reasoning Models: A Survey of Reinforced Reasoning with Large Language Models
·1945 words·10 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Tsinghua University
This survey paper explores the exciting new frontier of Large Reasoning Models (LRMs), focusing on how reinforcement learning and clever prompting techniques are boosting LLMs’ reasoning capabilities.
Multiple Choice Questions: Reasoning Makes Large Language Models (LLMs) More Self-Confident Even When They Are Wrong
·1926 words·10 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Nanjing University of Aeronautics and Astronautics
LLM reasoning boosts self-confidence in multiple-choice answers even when those answers are wrong, exposing limitations in current confidence-based evaluation metrics.
Bridging Language Barriers in Healthcare: A Study on Arabic LLMs
·1632 words·8 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 M42 Health
Arabic LLMs struggle with medical tasks; this study reveals optimal language ratios in training data for improved performance, highlighting challenges in simply translating medical data for different …
RLHS: Mitigating Misalignment in RLHF with Hindsight Simulation
·5724 words·27 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Princeton University
RLHS, a novel alignment algorithm, leverages simulated hindsight feedback to mitigate misalignment in RLHF, significantly improving AI’s alignment with human values and goals.
URSA: Understanding and Verifying Chain-of-thought Reasoning in Multimodal Mathematics
·5517 words·26 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Tsinghua University
URSA-7B: A new multimodal model significantly improves chain-of-thought reasoning in mathematics!
rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking
·3910 words·19 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Microsoft Research
Small language models can master complex math reasoning through self-evolved deep thinking with Monte Carlo Tree Search, surpassing much larger models in performance.
LLM4SR: A Survey on Large Language Models for Scientific Research
·2870 words·14 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 University of Texas at Dallas
LLMs revolutionize scientific research! This survey reveals their transformative potential across hypothesis discovery, experiment planning, writing, and peer review, guiding future research.
EpiCoder: Encompassing Diversity and Complexity in Code Generation
·5051 words·24 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Tsinghua University
EpiCoder revolutionizes code generation by using feature trees to create diverse and complex training data, resulting in state-of-the-art performance on various benchmarks.
Entropy-Guided Attention for Private LLMs
·5203 words·25 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 New York University
Boosting private LLMs’ efficiency and security, this research introduces an entropy-guided attention mechanism and PI-friendly layer normalization to mitigate the overheads of nonlinear operations.
BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning
·2687 words·13 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Shanghai AI Laboratory
BoostStep enhances large language models’ mathematical abilities by refining single-step reasoning through a novel step-level in-context learning strategy, achieving significant improvements on variou…
ToolHop: A Query-Driven Benchmark for Evaluating Large Language Models in Multi-Hop Tool Use
·3646 words·18 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 ByteDance
ToolHop: New benchmark dataset rigorously evaluates LLMs’ multi-hop tool use, revealing significant challenges and variations across different LLM families.
Test-time Computing: from System-1 Thinking to System-2 Thinking
·658 words·4 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Soochow University
Unlocking LLM potential: This paper surveys test-time computing, showing how it boosts reasoning abilities by shifting from reactive System-1 to deliberate System-2 thinking, paving the way for more p…
Scaling Laws for Floating Point Quantization Training
·6363 words·30 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Tencent AI Lab
New scaling laws for efficient floating-point quantization training in LLMs are presented, showing optimal bit allocation and critical data size.
REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models
·1374 words·7 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 String
REINFORCE++, a novel RLHF algorithm, achieves superior training stability and computational efficiency compared to existing methods like PPO and GRPO, while maintaining comparable performance.