Large Language Models
We Can't Understand AI Using our Existing Vocabulary
·3226 words·16 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Google DeepMind
To understand AI, we need new words! This paper argues that developing neologisms (new words for human and machine concepts) is key to bridging the communication gap and achieving better AI control.
LLMs Can Easily Learn to Reason from Demonstrations. Structure, not content, is what matters!
·3137 words·15 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 UC Berkeley
LLMs can be effectively taught complex reasoning via efficient fine-tuning on demonstration data focusing on structure, not content, of the reasoning process.
LASP-2: Rethinking Sequence Parallelism for Linear Attention and Its Hybrid
·2654 words·13 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Shanghai AI Laboratory
LASP-2 revolutionizes linear attention training by achieving 36.6% faster speeds than Ring Attention via a novel sequence parallelism method, boosting efficiency for very long sequences.
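The speedup comes from parallelizing linear attention over sequence chunks, which is possible because causal linear attention passes only a compact state between chunks. A minimal single-process sketch of that chunked recurrence (the feature map and chunk size are illustrative assumptions, and LASP-2's actual all-gather communication schedule is not modeled):

```python
import numpy as np

def chunked_linear_attention(q, k, v, chunk=64):
    """Causal linear attention computed chunk by chunk.

    Only a compact running state (S, z) crosses chunk boundaries, which is
    what sequence-parallel schemes like LASP-2 exchange between devices;
    here everything runs in one process for clarity.
    """
    T, d = q.shape
    phi = lambda x: np.maximum(x, 0.0) + 1.0   # positive feature map (assumed)
    S = np.zeros((d, d))                       # running sum of phi(k) v^T
    z = np.zeros(d)                            # running sum of phi(k)
    out = np.zeros_like(v)
    for s in range(0, T, chunk):
        e = min(s + chunk, T)
        qc, kc, vc = phi(q[s:e]), phi(k[s:e]), v[s:e]
        scores = np.tril(qc @ kc.T)            # causal scores within the chunk
        intra = scores @ vc
        norm = scores.sum(-1) + qc @ z
        out[s:e] = (intra + qc @ S) / np.maximum(norm, 1e-6)[:, None]
        S += kc.T @ vc                         # fold this chunk into the state
        z += kc.sum(0)
    return out

T, d = 256, 8
q, k, v = np.random.rand(T, d), np.random.rand(T, d), np.random.rand(T, d)
print(chunked_linear_attention(q, k, v).shape)   # (256, 8)
```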
CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction
·5174 words·25 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Hong Kong University of Science and Technology
CodeI/O condenses reasoning patterns from code into LLM training data by recasting programs as input/output prediction tasks, enhancing reasoning.
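The underlying recipe can be pictured as executing code on sampled inputs and asking the model to predict the result. A toy sketch under that reading (the helper names and prompt format are assumptions, and a real pipeline would execute code in a sandbox):

```python
import json
import random

def make_io_examples(fn, src, n=3, lo=-10, hi=10):
    """Turn a program into output-prediction training samples (toy version).

    The ground-truth answer comes from actually executing the code, so the
    model being trained must reason through the program to match it.
    """
    samples = []
    for _ in range(n):
        args = (random.randint(lo, hi), random.randint(lo, hi))
        samples.append({
            "prompt": f"Given the code:\n{src}\nPredict the output of fn{args}.",
            "answer": json.dumps(fn(*args)),
        })
    return samples

src = "def fn(a, b):\n    return abs(a - b) + min(a, b)"
ns = {}
exec(src, ns)                       # in practice: run inside a sandbox
print(make_io_examples(ns["fn"], src)[0])
```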
Auditing Prompt Caching in Language Model APIs
·5759 words·28 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Stanford University
Researchers expose widespread prompt caching in LLMs via novel timing attacks, highlighting significant privacy risks and model architecture leakage.
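The audit idea is simple to sketch: if an immediately repeated prompt returns consistently faster than a fresh one, responses are probably being served from a cache. A minimal timing probe, assuming a generic HTTP completion endpoint (the URL and JSON fields below are placeholders, not any specific provider's API):

```python
import statistics
import time

import requests

API = "https://example.com/v1/completions"   # placeholder endpoint

def time_one_request(prompt):
    t0 = time.perf_counter()
    requests.post(API, json={"prompt": prompt, "max_tokens": 1}, timeout=30)
    return time.perf_counter() - t0

def audit_prompt_caching(trials=10):
    """Compare latency of fresh prompts vs. immediate exact repeats.

    A consistently faster repeat suggests the provider serves it from a
    prompt cache -- the timing side channel the audit relies on.
    """
    fresh, repeat = [], []
    for i in range(trials):
        prompt = f"nonce={i} " + "lorem ipsum " * 300   # long, unique prompt
        fresh.append(time_one_request(prompt))
        repeat.append(time_one_request(prompt))         # identical request
    print(f"fresh median:  {statistics.median(fresh):.3f}s")
    print(f"repeat median: {statistics.median(repeat):.3f}s")
```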
SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators
·5896 words·28 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 AIRI
SynthDetoxM generates high-quality multilingual parallel data for text detoxification using LLMs, outperforming existing datasets and models in few-shot settings.
Steel-LLM: From Scratch to Open Source -- A Personal Journey in Building a Chinese-Centric LLM
·3355 words·16 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Tsinghua University
Steel-LLM: A fully open-source, resource-efficient Chinese LLM trained with transparency, achieving competitive performance despite limited resources.
ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates
·2360 words·12 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Princeton University
ReasonFlux boosts LLM mathematical reasoning by using hierarchical thought templates, outperforming top LLMs like OpenAI’s o1-preview and DeepSeek V3.
Matryoshka Quantization
·9741 words·46 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Google DeepMind
Matryoshka Quantization (MatQuant) boosts low-precision model accuracy by up to 10% through a novel multi-scale training approach. It leverages the nested structure of integer data types, allowing a single trained model to be served at multiple precisions (e.g., int8, int4, and int2).
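The "nested" structure is concrete: the most significant bits of an int8 weight are themselves a valid lower-precision weight, so int4 and int2 models can be sliced out of one int8 model. A small sketch of that bit-slicing (unsigned weights for simplicity; the paper's multi-scale training loss is not shown):

```python
import numpy as np

def slice_top_bits(w_int8, bits):
    """Keep only the `bits` most significant bits of 8-bit quantized weights.

    Because integer codes nest, the sliced values are themselves a valid
    lower-precision quantization -- one set of weights, several precisions.
    """
    return (w_int8.astype(np.uint8) >> (8 - bits)).astype(np.uint8)

w8 = np.random.default_rng(0).integers(0, 256, size=6, dtype=np.uint8)
print("int8:", w8)
print("int4:", slice_top_bits(w8, 4))
print("int2:", slice_top_bits(w8, 2))
```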
Ignore the KL Penalty! Boosting Exploration on Critical Tokens to Enhance RL Fine-Tuning
·3104 words·15 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Université Paris-Saclay
Boosting RL fine-tuning efficiency in LLMs: A novel KL penalty modification prioritizes exploration on critical tokens, dramatically improving model performance on arithmetic tasks.
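One way to picture the modification: compute the usual per-token KL to the reference model, but zero it out on the tokens where exploration matters. A hedged sketch, using high reference-model entropy as a stand-in for "critical" tokens (the proxy and threshold are illustrative assumptions):

```python
import torch

def selective_kl_penalty(logits, ref_logits, entropy_thresh=2.0):
    """Per-token KL(policy || reference), zeroed on 'critical' tokens.

    Criticality is proxied by high reference-model entropy (an assumption
    for illustration); dropping the KL pull there lets the policy explore
    the tokens that actually decide task success.
    """
    logp = torch.log_softmax(logits, dim=-1)
    ref_logp = torch.log_softmax(ref_logits, dim=-1)
    kl = (logp.exp() * (logp - ref_logp)).sum(-1)        # (batch, seq)
    ref_entropy = -(ref_logp.exp() * ref_logp).sum(-1)   # (batch, seq)
    keep = (ref_entropy < entropy_thresh).float()        # penalize only here
    return kl * keep

penalty = selective_kl_penalty(torch.randn(2, 5, 100), torch.randn(2, 5, 100))
print(penalty.shape)   # torch.Size([2, 5])
```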
Hephaestus: Improving Fundamental Agent Capabilities of Large Language Models through Continual Pre-Training
·3376 words·16 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Amazon
Hephaestus-Forge, a new large-scale pre-training corpus, significantly boosts LLM agent capabilities in API function calling, reasoning, and adaptability through continual pre-training.
Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning
·1736 words·9 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Shanghai AI Laboratory
OREAL, a novel RL framework, achieves state-of-the-art mathematical reasoning in LLMs using only binary outcome rewards, demonstrating that a 7B model can match the performance of 32B models.
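With only a binary outcome reward, the learning signal reduces to "did the final answer match." A minimal REINFORCE-style rendering of that setup (the mean-reward baseline is common variance-reduction practice, not necessarily the paper's exact estimator):

```python
import torch

def outcome_reward_loss(seq_logprobs, answers, gold):
    """REINFORCE-style loss driven only by binary outcome rewards.

    Reward is 1 iff a sampled solution's final answer matches the
    reference; a mean-reward baseline reduces gradient variance.
    """
    rewards = torch.tensor([float(a == gold) for a in answers])
    advantage = rewards - rewards.mean()
    return -(advantage * seq_logprobs).mean()

seq_logprobs = torch.tensor([-12.3, -15.1, -9.8], requires_grad=True)
loss = outcome_reward_loss(seq_logprobs, ["42", "41", "42"], gold="42")
loss.backward()
print(loss.item(), seq_logprobs.grad)
```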
Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling
·3884 words·19 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Tsinghua University
Smaller LLMs can outperform larger ones by strategically increasing computation during inference, challenging conventional assumptions about LLM scaling.
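The simplest instance of test-time scaling is best-of-N: sample many candidates from a small model and keep the one a verifier ranks highest. A toy sketch with a stubbed sampler and verifier (both are stand-ins; the paper studies stronger compute-optimal strategies such as process-reward-guided search):

```python
import random

def best_of_n(generate, score, prompt, n=16):
    """Best-of-N: spend inference compute instead of parameters.

    Sample n candidates from a (small) model and keep the one the
    verifier scores highest; with a good verifier this can beat a much
    larger model sampled once.
    """
    return max((generate(prompt) for _ in range(n)), key=score)

# Toy stand-ins for the sampler and verifier (illustrative assumptions).
generate = lambda prompt: random.gauss(0.0, 1.0)
score = lambda candidate: -abs(candidate - 0.7)   # prefers values near 0.7
print(best_of_n(generate, score, "prompt", n=64))
```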
Training Language Models for Social Deduction with Multi-Agent Reinforcement Learning
·507 words·3 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Stanford University
Language models learn effective social deduction strategies in a virtual game by using their goal of predicting useful information as a dense reward signal, doubling win rates compared to standard RL.
The Curse of Depth in Large Language Models
·2429 words·12 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Medical Artificial Intelligence Laboratory, Westlake University
Deep layers in LLMs underperform due to Pre-Layer Normalization; LayerNorm Scaling resolves this by controlling output variance, significantly improving training efficiency.
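LayerNorm Scaling itself amounts to a one-line change: divide each layer's Pre-LN output by the square root of its depth index so variance stops growing with depth. A minimal module sketch (the module name and exact placement of the scale are written from the summary above, not the paper's reference code):

```python
import torch
import torch.nn as nn

class ScaledPreLN(nn.Module):
    """Pre-LayerNorm with output scaled by 1/sqrt(layer_index).

    Damping deeper layers' LN outputs keeps output variance from growing
    with depth, which the paper identifies as why deep Pre-LN layers
    underperform.
    """
    def __init__(self, dim, layer_index):
        super().__init__()
        self.ln = nn.LayerNorm(dim)
        self.scale = layer_index ** -0.5   # layer_index counted from 1

    def forward(self, x):
        return self.ln(x) * self.scale

x = torch.randn(2, 8, 16)
print(ScaledPreLN(16, layer_index=12)(x).std().item())
```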
LM2: Large Memory Models
·2722 words·13 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Convergence Labs Ltd
LM2: Large Memory Models enhance Transformers by adding an auxiliary memory module, significantly improving multi-step reasoning and long-context information synthesis.
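Loosely, the auxiliary memory can be pictured as a bank of persistent slots that tokens cross-attend to alongside ordinary self-attention. A heavily hedged sketch in that spirit (slot count, gating, and the absence of a memory-update rule are all illustrative assumptions, not LM2's exact design):

```python
import torch
import torch.nn as nn

class MemoryAugmentedBlock(nn.Module):
    """Transformer block with an auxiliary memory read (hedged sketch).

    Tokens cross-attend to a bank of persistent memory slots alongside
    self-attention, and the read is gated into the residual stream.
    """
    def __init__(self, dim, slots=32, heads=4):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(slots, dim) * 0.02)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mem_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        h, _ = self.self_attn(x, x, x)
        x = x + h
        mem = self.memory.unsqueeze(0).expand(x.size(0), -1, -1)
        read, _ = self.mem_attn(x, mem, mem)
        return x + torch.sigmoid(self.gate(x)) * read   # gated memory read

print(MemoryAugmentedBlock(16)(torch.randn(2, 10, 16)).shape)
```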
APE: Faster and Longer Context-Augmented Generation via Adaptive Parallel Encoding
·6090 words·29 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Carnegie Mellon University
APE, a novel adaptive parallel encoding method, significantly speeds up context-augmented generation (CAG), achieving a 4.5x speedup while maintaining high accuracy even with 128K-token contexts.
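The parallel-encoding idea: compute each retrieved context's KV cache independently (so caches are reusable and built in parallel), then let generation attend over their concatenation instead of re-encoding one long prompt. A single-head sketch of that flow (the `encode` stub and shapes are assumptions; APE's adaptive alignment corrections are omitted):

```python
import torch

def parallel_encode_then_attend(encode, contexts, query):
    """Encode each context independently, then attend across all KV caches.

    Caches can be precomputed per context; generation attends over their
    concatenation rather than re-encoding one long combined prompt.
    """
    keys, values = zip(*(encode(c) for c in contexts))
    K, V = torch.cat(keys), torch.cat(values)           # (total_len, dim)
    attn = torch.softmax(query @ K.T / K.size(-1) ** 0.5, dim=-1)
    return attn @ V

dim = 16
encode = lambda c: (torch.randn(len(c), dim), torch.randn(len(c), dim))  # stub
out = parallel_encode_then_attend(encode, ["context one", "context two"],
                                  torch.randn(1, dim))
print(out.shape)   # torch.Size([1, 16])
```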
Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach
·5939 words·28 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 University of Maryland
Boost LLM reasoning power at test time by recursively processing latent information, enabling dramatic performance gains with fewer parameters.
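The recurrent-depth recipe: apply one shared core block to a latent state a variable number of times before decoding, so "thinking longer" means more loop iterations over the same weights rather than more parameters. A toy sketch (layer shapes and the re-injection of the input each step are illustrative assumptions):

```python
import torch
import torch.nn as nn

class RecurrentDepthModel(nn.Module):
    """Scale test-time compute by looping a shared core block in latent space.

    The loop count at inference controls how much computation is spent,
    independent of parameter count.
    """
    def __init__(self, dim):
        super().__init__()
        self.embed = nn.Linear(dim, dim)
        self.core = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(),
                                  nn.Linear(dim, dim))
        self.head = nn.Linear(dim, dim)

    def forward(self, x, steps=8):
        e = self.embed(x)
        state = torch.zeros_like(e)
        for _ in range(steps):                          # recurrence over depth
            state = self.core(torch.cat([state, e], dim=-1))
        return self.head(state)

model = RecurrentDepthModel(16)
x = torch.randn(2, 16)
print((model(x, steps=4) - model(x, steps=32)).norm().item())
```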
QuEST: Stable Training of LLMs with 1-Bit Weights and Activations
·3320 words·16 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 ISTA
QuEST enables stable, accurate LLM training using only 1-bit weights and activations, achieving Pareto-optimal performance compared to higher-precision models.
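The basic mechanism that makes 1-bit training possible at all is quantize-in-forward, straight-through-in-backward. A generic sketch of that estimator (QuEST's actual contribution refines the transform and the gradient estimate, which this sketch does not reproduce):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Binarize weights in the forward pass; straight-through backward.

    The generic mechanism behind 1-bit quantization-aware training:
    the forward pass sees 1-bit codes with a scalar scale, while the
    backward pass passes gradients through (clipped) as if no
    quantization had happened.
    """
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return w.sign() * w.abs().mean()          # 1-bit codes, scalar scale

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # clipped straight-through

w = torch.randn(4, 4, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()
print(w.grad)
```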
Generating Symbolic World Models via Test-time Scaling of Large Language Models
·2722 words·13 mins·
AI Generated
🤖 Daily Papers
Natural Language Processing
Large Language Models
🏢 Hong Kong University of Science and Technology
LLMs excel at complex reasoning but struggle with planning; this paper introduces a test-time scaling approach that strengthens LLMs’ PDDL reasoning, enabling high-quality PDDL domain generation and outperforming existing methods.