
Natural Language Processing

Weighted-Reward Preference Optimization for Implicit Model Fusion
·4595 words·22 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 School of Computer Science and Engineering, Sun Yat-Sen University
WRPO: Implicitly fuse LLMs, boosting performance without complex alignment or merging!
Robust Multi-bit Text Watermark with LLM-based Paraphrasers
·3046 words·15 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 ByteDance Research
Researchers developed a robust multi-bit text watermarking method using LLMs for paraphrasing, achieving over 99.99% detection accuracy while maintaining semantic information and resisting common attacks.
Evaluating Language Models as Synthetic Data Generators
·4403 words·21 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Carnegie Mellon University
AGORABENCH: A new benchmark reveals surprising strengths & weaknesses of LMs as synthetic data generators, showing that problem-solving ability isn’t the sole indicator of data quality.
OCR Hinders RAG: Evaluating the Cascading Impact of OCR on Retrieval-Augmented Generation
·4800 words·23 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Peking University
Imperfect OCR hinders Retrieval-Augmented Generation (RAG). OHRBench, a new benchmark, reveals this cascading impact, showing current OCR solutions are insufficient for building high-quality RAG knowledge bases.
Towards Cross-Lingual Audio Abuse Detection in Low-Resource Settings with Few-Shot Learning
·1712 words·9 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Text Classification 🏒 Telecom SudParis
Few-shot learning empowers cross-lingual audio abuse detection using pre-trained models, achieving high accuracy in low-resource Indian languages.
Free Process Rewards without Process Labels
·3126 words·15 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Tsinghua University
Train high-performing Process Reward Models (PRMs) cheaply using only outcome-level labels, eliminating the need for costly step-by-step annotations!
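The trick behind this result is that a reward model trained only on outcome labels, when parameterized as a sum of per-token log-likelihood ratios against a frozen reference model, implicitly yields per-step rewards for free. A minimal sketch of reading off those step rewards, assuming per-token log-probabilities and step boundaries have already been computed (function and argument names are hypothetical):

```python
import torch

def implicit_step_rewards(policy_logprobs, ref_logprobs, step_boundaries, beta=1.0):
    """Read off per-step process rewards from a model trained only on
    outcome labels: the reward of a reasoning step is beta times the sum
    of per-token log-likelihood ratios (policy vs. reference) over that
    step's tokens. A sketch of the implicit-PRM idea, not the paper's code.

    policy_logprobs, ref_logprobs: 1-D tensors of per-token log-probs
        for one sampled solution under each model.
    step_boundaries: list of (start, end) token-index pairs, one per step.
    """
    ratios = policy_logprobs - ref_logprobs            # per-token log ratio
    return [beta * ratios[s:e].sum().item() for s, e in step_boundaries]
```

Because the step reward is just a partial sum of quantities the model already produces, no step-level annotation or extra forward pass is needed at scoring time.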
o1-Coder: an o1 Replication for Coding
·1672 words·8 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Beijing Jiaotong University
O1-CODER replicates OpenAI’s o1 model for coding, integrating reinforcement learning and Monte Carlo Tree Search to enhance System-2 thinking and generate high-quality code with reasoning steps.
LLM Teacher-Student Framework for Text Classification With No Manually Annotated Data: A Case Study in IPTC News Topic Classification
·2350 words·12 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Text Classification 🏒 JoΕΎef Stefan Institute
Researchers developed a multilingual news topic classifier using a teacher-student framework and GPT-4o for automatic data annotation, achieving high performance without manual annotation.
KV Shifting Attention Enhances Language Modeling
·5293 words·25 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Baichuan Inc.
KV Shifting Attention: a novel attention mechanism enhances language modeling by simplifying induction heads, yielding improved performance and faster convergence, even at large scale.
INCLUDE: Evaluating Multilingual Language Understanding with Regional Knowledge
·7526 words·36 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 EPFL
New multilingual LLM benchmark, INCLUDE, tackles regional knowledge gaps by using 197K QA pairs from 44 languages, improving cross-lingual evaluation.
Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM's Reasoning Capability
·2134 words·11 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Tencent AI Lab
Boosting LLMs’ reasoning: A novel token-level contrastive estimation method automatically identifies and penalizes critical tokens leading to errors, significantly enhancing reasoning accuracy.
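As a rough illustration of the contrastive idea: fine-tune one model on correct rollouts and one on incorrect rollouts, then flag the tokens whose likelihood differs most between the two. The sketch below assumes per-token log-probabilities have already been computed; the names and the exact scoring rule are assumptions, not the paper's reference implementation:

```python
import torch

def rank_critical_tokens(logp_pos, logp_neg):
    """Token-level contrastive estimation, sketched: score each token of
    an incorrect trajectory by contrasting per-token log-likelihoods under
    a model fine-tuned on correct rollouts (logp_pos) against one fine-tuned
    on incorrect rollouts (logp_neg). Tokens the 'negative' model prefers
    much more strongly are flagged as likely critical errors.

    logp_pos, logp_neg: 1-D tensors, one log-prob per token of the trajectory.
    Returns token indices sorted from most to least critical.
    """
    scores = logp_neg - logp_pos           # high = characteristic of failures
    return torch.argsort(scores, descending=True)
```

The ranked indices can then drive a token-level penalty during preference optimization, rather than penalizing the whole trajectory uniformly.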
A Simple and Provable Scaling Law for the Test-Time Compute of Large Language Models
·1730 words·9 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Alibaba Group
Boost LLM accuracy exponentially by using a two-stage algorithm with provable scaling laws: generate multiple candidate solutions then compare them in a knockout tournament!
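The two-stage algorithm is simple enough to sketch end to end: sample N candidates, then eliminate them pairwise until one remains. A hedged sketch follows; the paper repeats each pairwise comparison several times and takes a majority vote, whereas this simplified version assumes a single caller-supplied `better` comparison:

```python
import random

def knockout_best(prompt, generate, better, n=8):
    """Two-stage test-time compute, sketched:
    (1) sample n candidate solutions with `generate`;
    (2) run a single-elimination tournament where `better(prompt, a, b)`
        judges each pair (e.g., by asking the LLM which answer is correct)
        and the winner advances.
    `generate` and `better` are caller-supplied stubs, not a fixed API.
    """
    candidates = [generate(prompt) for _ in range(n)]
    while len(candidates) > 1:
        random.shuffle(candidates)
        winners = [a if better(prompt, a, b) else b
                   for a, b in zip(candidates[::2], candidates[1::2])]
        if len(candidates) % 2:            # odd candidate out gets a bye
            winners.append(candidates[-1])
        candidates = winners
    return candidates[0]
```

The provable part is the scaling behavior: as long as each comparison is right with probability above one half, the chance that a wrong answer survives every round shrinks as the tournament grows.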
A dynamic parallel method for performance optimization on hybrid CPUs
·1564 words·8 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Intel Corporation
Dynamic parallel processing speeds up LLM inference on hybrid CPUs, achieving over 90% memory-bandwidth utilization and resolving bottlenecks caused by imbalanced core capabilities.
Puzzle: Distillation-Based NAS for Inference-Optimized LLMs
·4724 words·23 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 NVIDIA
Puzzle: a novel framework accelerates large language model inference via neural architecture search and knowledge distillation, achieving a 2.17x speedup on a single GPU while preserving 98.4% accuracy.
Training and Evaluating Language Models with Template-based Data Generation
·415 words·2 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Tsinghua University
Researchers created TemplateGSM, a massive dataset of 7M+ grade-school math problems and solutions, using GPT-4 to generate templates, significantly advancing LLM training for mathematical reasoning.
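Template-based generation is easy to picture: a template is a small program that samples parameters, renders a problem statement, and computes the ground-truth solution, so correctness holds by construction. A toy, hand-written template in the spirit of TemplateGSM (the dataset's real templates are GPT-4-authored and far more varied):

```python
import random

def sample_problem():
    """Toy template-based data generation: sample parameters, render a
    grade-school word problem, and compute the answer programmatically,
    so every generated (question, solution) pair is guaranteed correct.
    """
    apples = random.randint(2, 20)
    price = random.randint(1, 9)
    question = (f"Sam buys {apples} apples at ${price} each. "
                f"How much does Sam spend?")
    answer = apples * price
    solution = f"Sam spends {apples} * {price} = {answer} dollars."
    return {"question": question, "solution": solution, "answer": answer}
```

Running one template with many random seeds yields unlimited verified training pairs, which is how a 7M+ example corpus becomes cheap to build.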
Draft Model Knows When to Stop: A Self-Verification Length Policy for Speculative Decoding
·2920 words·14 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Tencent AI Lab
Self-VerIfication length Policy (SVIP) dynamically adjusts speculative decoding draft lengths based on token difficulty, achieving up to 20% faster large language model inference.
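The policy is intuitively a confidence gate on the draft model: keep drafting while its next-token distribution looks easy, stop early when it looks hard. A hedged sketch using predictive entropy as the difficulty signal (SVIP's actual stopping criterion is derived from an acceptance bound; `draft_step` and the threshold `tau` here are assumptions):

```python
import torch

def draft_until_uncertain(draft_step, state, max_len=16, tau=1.5):
    """Dynamic draft-length policy, sketched: emit draft tokens while the
    draft model's predictive entropy stays below tau; stop early on hard
    tokens so fewer drafts get rejected by the target model.

    draft_step(state) -> (logits, token, new_state) is a caller-supplied
    stub wrapping one autoregressive step of the draft model.
    """
    tokens = []
    for _ in range(max_len):
        logits, token, state = draft_step(state)
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
        tokens.append(token)
        if entropy > tau:                  # token looks hard: hand off early
            break
    return tokens, state
```

Compared with a fixed draft length, this spends the draft budget where the draft model is likely to be accepted, which is where the reported speedups come from.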
Beyond Examples: High-level Automated Reasoning Paradigm in In-Context Learning via MCTS
·2022 words·10 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Tsinghua University
HiAR-ICL, a novel automated reasoning paradigm using Monte Carlo Tree Search, surpasses state-of-the-art accuracy in complex mathematical reasoning by shifting focus from specific examples to abstract reasoning patterns.
Star Attention: Efficient LLM Inference over Long Sequences
·5535 words·26 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 NVIDIA
Star Attention: 11x faster LLM inference on long sequences with 95-100% accuracy!
Low-Bit Quantization Favors Undertrained LLMs: Scaling Laws for Quantized LLMs with 100T Training Tokens
·3397 words·16 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Large Language Models 🏒 Tencent AI Lab
Low-bit quantization excels for undertrained LLMs but struggles with fully trained ones; new scaling laws quantify this effect and point the way for future research.
LongKey: Keyphrase Extraction for Long Documents
·3409 words·17 mins
AI Generated πŸ€— Daily Papers Natural Language Processing Information Extraction 🏒 University of Luxembourg
LongKey: A novel framework excels at extracting keyphrases from lengthy documents using an encoder-based language model and max-pooling, outperforming existing methods across diverse datasets.
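The max-pooling idea can be shown in a few lines: a candidate keyphrase may occur many times across a long document, so pool each occurrence's token embeddings and then max-pool across occurrences, letting the strongest contextual evidence anywhere in the document dominate. A sketch under that reading (shapes and names are assumptions, not LongKey's exact code):

```python
import torch

def keyphrase_embedding(token_embs, occurrences):
    """LongKey-style candidate embedding, sketched: average the token
    embeddings within each occurrence span of a candidate keyphrase, then
    max-pool element-wise over all occurrences so the single embedding
    keeps the strongest evidence from anywhere in the long document.

    token_embs: (seq_len, dim) encoder outputs, chunks concatenated.
    occurrences: list of (start, end) token spans for one candidate.
    Returns a (dim,) embedding for ranking the candidate.
    """
    spans = [token_embs[s:e].mean(dim=0) for s, e in occurrences]
    return torch.stack(spans).max(dim=0).values
```

Max-pooling, rather than averaging, across occurrences is what keeps a keyphrase competitive even when most of its mentions sit in uninformative contexts.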