🏢 Renmin University of China

Challenging the Boundaries of Reasoning: An Olympiad-Level Math Benchmark for Large Language Models
·5419 words·26 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Renmin University of China
OlymMATH: A new Olympiad-level math benchmark rigorously tests LLMs’ reasoning, revealing limitations and paving the way for advancements.
ETVA: Evaluation of Text-to-Video Alignment via Fine-grained Question Generation and Answering
·3338 words·16 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Renmin University of China
ETVA measures text-to-video alignment by generating fine-grained questions from the prompt and answering them against the generated video.
MathFusion: Enhancing Mathematic Problem-solving of LLM through Instruction Fusion
·2769 words·13 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Renmin University of China
MathFusion: instruction fusion boosts LLMs’ math problem-solving!
Perplexity Trap: PLM-Based Retrievers Overrate Low Perplexity Documents
·3678 words·18 mins
AI Generated 🤗 Daily Papers Natural Language Processing Information Extraction 🏢 Renmin University of China
PLM retrievers overrate low-perplexity docs, causing source bias. This paper reveals the causal effect & offers a fix!
SEAP: Training-free Sparse Expert Activation Pruning Unlock the Brainpower of Large Language Models
·3962 words·19 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Renmin University of China
SEAP: Unlock LLM brainpower w/ training-free sparse expert activation pruning! Boost efficiency, keep accuracy. Optimize LLMs now!
Effective and Efficient Masked Image Generation Models
·4167 words·20 mins
AI Generated 🤗 Daily Papers Computer Vision Image Generation 🏢 Renmin University of China
eMIGM: A unified, efficient masked image generation model achieving state-of-the-art performance with fewer resources.
R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning
·3585 words·17 mins
AI Generated 🤗 Daily Papers Natural Language Processing Question Answering 🏢 Renmin University of China
R1-Searcher: RL incentivizes autonomous search in LLMs, outperforming RAG methods and even GPT-4o-mini!
An Empirical Study on Eliciting and Improving R1-like Reasoning Models
·3690 words·18 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Renmin University of China
This paper explores and improves R1-like reasoning models through RL and tool manipulation, achieving significant accuracy gains.
SurveyX: Academic Survey Automation via Large Language Models
·2720 words·13 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Renmin University of China
SurveyX automates academic survey generation, enhancing content and citation quality.
YuLan-Mini: An Open Data-efficient Language Model
·4206 words·20 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Renmin University of China
YuLan-Mini: An open, data-efficient 2.42B parameter LLM achieving top-tier performance with innovative training techniques.
RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation
·4628 words·22 mins
AI Generated 🤗 Daily Papers Natural Language Processing Question Answering 🏢 Renmin University of China
RetroLLM unifies retrieval and generation in LLMs, boosting accuracy while cutting costs.
HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems
·2200 words·11 mins
AI Generated 🤗 Daily Papers Natural Language Processing Question Answering 🏢 Renmin University of China
HtmlRAG boosts RAG accuracy by modeling retrieved knowledge as HTML rather than plain text, preserving structure and mitigating LLM hallucination.