Large Language Models
Personalized Graph-Based Retrieval for Large Language Models
3633 words · 18 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 University of California Santa Cruz
Personalized Graph-based Retrieval-Augmented Generation (PGraphRAG) significantly improves personalized text generation by leveraging user-centric knowledge graphs, especially in cold-start scenarios …
METAGENE-1: Metagenomic Foundation Model for Pandemic Monitoring
3440 words · 17 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 University of Southern California
METAGENE-1, a 7-billion parameter language model, achieves state-of-the-art results in pathogen detection and genomic embedding by leveraging a massive wastewater metagenomic dataset.
Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models
3986 words · 19 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Ant Group
Auto-RT automates LLM vulnerability discovery by using reinforcement learning to optimize complex attack strategies, achieving faster detection and higher success rates than existing methods.
Dynamic Scaling of Unit Tests for Code Reward Modeling
3208 words · 16 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Tsinghua University
Boosting code generation accuracy with more unit tests! This research shows that scaling up the number of unit tests used to evaluate LLM-generated code significantly improves the accuracy of the resulting reward signal, especially…
CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings
2397 words · 12 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Alibaba Group
The CodeElo benchmark uses CodeForces problems to fairly evaluate LLMs’ coding abilities, providing human-comparable Elo ratings and addressing the limitations of existing benchmarks.
BoxingGym: Benchmarking Progress in Automated Experimental Design and Model Discovery
4247 words · 20 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Stanford University
BoxingGym: A new benchmark rigorously evaluates AI agents’ ability to design experiments and discover scientific models, revealing current LLMs’ limitations and highlighting fertile research avenues.
LUSIFER: Language Universal Space Integration for Enhanced Multilingual Embeddings with Large Language Models
4898 words · 23 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 University of Oregon
LUSIFER: a novel zero-shot approach empowers English-centric LLM embedding models for multilingual tasks without explicit multilingual training data, significantly enhancing performance, especially fo…
Understanding and Mitigating Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing
3334 words · 16 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 University of Texas at Austin
Polarizing SSMs’ state transition matrices enhances long-range dependency modeling by mitigating recency bias and over-smoothing.
HumanEval Pro and MBPP Pro: Evaluating Large Language Models on Self-invoking Code Generation
3981 words · 19 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Tsinghua University
New benchmarks, HumanEval Pro and MBPP Pro, reveal LLMs struggle with self-invoking code generation, highlighting a critical gap in current code reasoning capabilities.
Facilitating large language model Russian adaptation with Learned Embedding Propagation
2350 words · 12 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Lomonosov Moscow State University
Researchers introduce Learned Embedding Propagation (LEP), a novel technique that efficiently adapts large language models (LLMs) to new languages using minimal training data, thus overcoming limitati…
Efficiently Serving LLM Reasoning Programs with Certaindex
4124 words · 20 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 UC San Diego
Dynasor optimizes LLM reasoning by dynamically allocating compute based on a novel ‘certaindex’ metric, reducing compute by up to 50% and increasing query rates by 3.3x.
Xmodel-2 Technical Report
2582 words · 13 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Xiaoduo AI Lab
Xmodel-2: A 1.2B parameter LLM achieving state-of-the-art reasoning performance through efficient architecture and training, now publicly available!
Safeguard Fine-Tuned LLMs Through Pre- and Post-Tuning Model Merging
269 words · 2 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Intel Labs
Boost fine-tuned LLMs’ performance without sacrificing safety by merging pre- and post-tuning model weights!
Token-Budget-Aware LLM Reasoning
3147 words · 15 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Nanjing University
TALE: A novel framework dynamically adjusts token budgets in LLM reasoning prompts, slashing costs by ~70% with minimal accuracy loss.
Molar: Multimodal LLMs with Collaborative Filtering Alignment for Enhanced Sequential Recommendation
2542 words · 12 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 University of Science and Technology of China
Molar: A novel multimodal LLM framework boosts sequential recommendation accuracy by cleverly aligning collaborative filtering with rich item representations from text and non-text data.
YuLan-Mini: An Open Data-efficient Language Model
4206 words · 20 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Renmin University of China
YuLan-Mini: An open, data-efficient 2.42B parameter LLM achieving top-tier performance with innovative training techniques.
In Case You Missed It: ARC 'Challenge' Is Not That Challenging
2565 words · 13 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Snowflake AI Research
LLM evaluation on multiple-choice questions is flawed; considering all options simultaneously, not individually, reveals much higher accuracy and challenges existing benchmark rankings.
Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization
2203 words · 11 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Tsinghua University
FoPE enhances attention’s periodic extension for better length generalization in language models by addressing spectral damage in RoPE using Fourier series and zeroing out destructive frequencies.
Deliberation in Latent Space via Differentiable Cache Augmentation
3569 words · 17 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Google DeepMind
Frozen LLMs get a performance boost by augmenting their key-value cache with latent embeddings generated by a differentiable offline coprocessor.
B-STaR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners
2172 words · 11 mins
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 Hong Kong University of Science and Technology
B-STaR dynamically balances exploration and exploitation in self-taught reasoners, achieving superior performance in mathematical, coding, and commonsense reasoning tasks.