
SparseLLM: Towards Global Pruning of Pre-trained Language Models
· 2184 words · 11 mins
Natural Language Processing Large Language Models 🏢 Emory University
SparseLLM prunes large language models globally yet efficiently by decomposing the global pruning problem into manageable subproblems, yielding significant performance gains, especially at high sparsity.
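To illustrate the decomposition idea in the summary above, here is a minimal, hypothetical sketch: a global sparsity target is met by solving an independent magnitude-pruning subproblem per layer. The function names and the uniform per-layer allocation are illustrative assumptions, not SparseLLM's actual algorithm, which the full article describes.

```python
import numpy as np

def prune_layer(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """One subproblem: zero out the smallest-magnitude fraction of a layer's weights.

    Illustrative magnitude pruning, not SparseLLM's actual subproblem solver.
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def global_prune(layers: list[np.ndarray], sparsity: float) -> list[np.ndarray]:
    """Approximate a global pruning problem as one independent subproblem per layer."""
    return [prune_layer(w, sparsity) for w in layers]

rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 4)) for _ in range(2)]
for w in global_prune(layers, 0.5):
    print(f"layer sparsity: {np.mean(w == 0.0):.2f}")
```

The point of the decomposition is that each per-layer subproblem is small enough to solve tractably, while the shared sparsity target couples them into an (approximate) global solution.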