Natural Language Processing

GeAR: Generation Augmented Retrieval
·1952 words·10 mins
AI Generated 🤗 Daily Papers Natural Language Processing Question Answering 🏢 Microsoft Research
GeAR, a new retrieval model, boosts accuracy by combining document retrieval with fine-grained information generation, leading to better understanding and improved localization.
BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning
·2687 words·13 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Shanghai AI Laboratory
BoostStep enhances large language models’ mathematical abilities by refining single-step reasoning through a novel step-level in-context learning strategy, achieving significant improvements on variou…
ToolHop: A Query-Driven Benchmark for Evaluating Large Language Models in Multi-Hop Tool Use
·3646 words·18 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 ByteDance
ToolHop: New benchmark dataset rigorously evaluates LLMs’ multi-hop tool use, revealing significant challenges and variations across different LLM families.
Test-time Computing: from System-1 Thinking to System-2 Thinking
·658 words·4 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Soochow University
Unlocking LLM potential: This paper surveys test-time computing, showing how it boosts reasoning abilities by shifting from reactive System-1 to deliberate System-2 thinking, paving the way for more p…
Scaling Laws for Floating Point Quantization Training
·6363 words·30 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Tencent AI Lab
New scaling laws for efficient floating-point quantization training in LLMs are presented, showing optimal bit allocation and critical data size.
REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models
·1374 words·7 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 String
REINFORCE++, a novel RLHF algorithm, achieves superior training stability and computational efficiency compared to existing methods like PPO and GRPO, while maintaining comparable performance.
Personalized Graph-Based Retrieval for Large Language Models
·3633 words·18 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 University of California Santa Cruz
Personalized Graph-based Retrieval-Augmented Generation (PGraphRAG) significantly improves personalized text generation by leveraging user-centric knowledge graphs, especially in cold-start scenarios …
METAGENE-1: Metagenomic Foundation Model for Pandemic Monitoring
·3440 words·17 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 University of Southern California
METAGENE-1, a 7-billion parameter language model, achieves state-of-the-art results in pathogen detection and genomic embedding by leveraging a massive wastewater metagenomic dataset.
Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models
·3986 words·19 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Ant Group
AUTO-RT automates LLM vulnerability discovery by using reinforcement learning to optimize complex attack strategies, achieving faster detection and higher success rates than existing methods.
Dynamic Scaling of Unit Tests for Code Reward Modeling
·3208 words·16 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Tsinghua University
Boosting code generation accuracy with more unit tests! This research shows that increasing the number of unit tests used to evaluate code generated by LLMs significantly improves accuracy, especially…
CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings
·2397 words·12 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Alibaba Group
CODEELO benchmark uses CodeForces to fairly evaluate LLMs’ coding abilities, providing human-comparable Elo ratings and addressing limitations of existing benchmarks.
BoxingGym: Benchmarking Progress in Automated Experimental Design and Model Discovery
·4247 words·20 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Stanford University
BoxingGym: A new benchmark rigorously evaluates AI agents’ ability to design experiments and discover scientific models, revealing current LLMs’ limitations and highlighting fertile research avenues.
LUSIFER: Language Universal Space Integration for Enhanced Multilingual Embeddings with Large Language Models
·4898 words·23 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 University of Oregon
LUSIFER: a novel zero-shot approach empowers English-centric LLM embedding models for multilingual tasks without explicit multilingual training data, significantly enhancing performance, especially fo…
Understanding and Mitigating Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing
·3334 words·16 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 University of Texas at Austin
Polarizing SSMs’ state transition matrices enhances long-range dependency modeling by mitigating recency bias and over-smoothing.
TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization
·3050 words·15 mins
AI Generated 🤗 Daily Papers Natural Language Processing Text Generation 🏢 Singapore University of Technology and Design
TANGOFLUX: Blazing-fast, high-fidelity text-to-audio generation using novel CLAP-Ranked Preference Optimization.
MapQaTor: A System for Efficient Annotation of Map Query Datasets
·3496 words·17 mins
AI Generated 🤗 Daily Papers Natural Language Processing Question Answering 🏢 Department of Computer Science and Engineering
MAPQATOR: a web app that streamlines the creation of reproducible geospatial QA datasets, boosting annotation speed by 30x!
HumanEval Pro and MBPP Pro: Evaluating Large Language Models on Self-invoking Code Generation
·3981 words·19 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Tsinghua University
New benchmarks, HumanEval Pro and MBPP Pro, reveal LLMs struggle with self-invoking code generation, highlighting a critical gap in current code reasoning capabilities.
Facilitating large language model Russian adaptation with Learned Embedding Propagation
·2350 words·12 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Lomonosov Moscow State University
Researchers introduce Learned Embedding Propagation (LEP), a novel technique that efficiently adapts large language models (LLMs) to new languages using minimal training data, thus overcoming limitati…
Efficiently Serving LLM Reasoning Programs with Certaindex
·4124 words·20 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 UC San Diego
Dynasor optimizes LLM reasoning by dynamically allocating compute based on a novel ‘certaindex’ metric, reducing compute by up to 50% and increasing query rates by 3.3x.
OneKE: A Dockerized Schema-Guided LLM Agent-based Knowledge Extraction System
·379 words·2 mins
AI Generated 🤗 Daily Papers Natural Language Processing Information Extraction 🏢 Zhejiang University
OneKE: a dockerized, schema-guided LLM agent system efficiently extracts knowledge from diverse sources, offering adaptability and robust error handling.