
🏢 Huawei Noah's Ark Lab

VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections
·1706 words·9 mins
Natural Language Processing · Large Language Models · 🏢 Huawei Noah's Ark Lab
VeLoRA: Train massive LLMs efficiently by compressing intermediate activations!
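A minimal sketch of the idea behind the title, assuming hypothetical function and tensor names (not the paper's actual implementation): each token's activation vector is split into fixed-size sub-tokens, and only the scalar projection of each sub-token onto a shared rank-1 direction is kept between the forward and backward passes.

```python
import torch

def velora_compress(x: torch.Tensor, u: torch.Tensor, group: int) -> torch.Tensor:
    """Keep only the rank-1 projection of each sub-token (assumes d % group == 0).

    x: (n_tokens, d) intermediate activations; u: (group,) shared direction.
    Returns (n_tokens, d // group) scalars -- a `group`-fold memory saving.
    """
    n, d = x.shape
    subs = x.reshape(n, d // group, group)   # split each token into sub-tokens
    return subs @ u                          # one scalar per sub-token

def velora_reconstruct(coeffs: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
    """Rebuild a rank-1 approximation of the activations for the backward pass."""
    n, m = coeffs.shape
    approx = coeffs.unsqueeze(-1) * u        # (n, m, group): scalar times direction
    return approx.reshape(n, m * u.shape[0])
```

Between the two passes, only the coefficients and the shared direction `u` need to stay in memory, rather than the full activation tensor.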
UNIT: Unifying Image and Text Recognition in One Vision Encoder
·1581 words·8 mins
Multimodal Learning · Vision-Language Models · 🏢 Huawei Noah's Ark Lab
UNIT: One Vision Encoder Unifies Image & Text Recognition!
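In spirit (module names here are illustrative assumptions, not UNIT's real architecture), this amounts to one shared backbone feeding two task heads:

```python
import torch
import torch.nn as nn

class UnifiedVisionEncoder(nn.Module):
    """Sketch: one shared encoder serves both natural-image recognition
    and text (OCR) recognition, instead of two separate vision models."""

    def __init__(self, backbone: nn.Module, dim: int, n_classes: int, n_chars: int):
        super().__init__()
        self.backbone = backbone                      # shared ViT-style encoder
        self.image_head = nn.Linear(dim, n_classes)   # image-level labels
        self.text_head = nn.Linear(dim, n_chars)      # per-position characters

    def forward(self, pixels: torch.Tensor):
        feats = self.backbone(pixels)                 # (batch, seq, dim)
        return self.image_head(feats.mean(dim=1)), self.text_head(feats)
```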
Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning
·1847 words·9 mins
Natural Language Processing · Large Language Models · 🏢 Huawei Noah's Ark Lab
Star-Agents automates data optimization for instruction-tuned LLMs via multi-agent collaboration, achieving a 12% average performance boost.
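A heavily simplified sketch of the multi-agent loop (the `rewriters` and `judge` callables are assumptions standing in for LLM-backed agents, not the paper's actual pipeline):

```python
def optimize_sample(sample: dict, rewriters: list, judge) -> dict:
    """Several rewriter agents each propose an improved version of an
    instruction-tuning sample; a judge agent scores every candidate and
    the best one (possibly the original) is kept for the training set."""
    candidates = [sample] + [rewrite(sample) for rewrite in rewriters]
    return max(candidates, key=judge)
```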
Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exiting
·2148 words·11 mins
Natural Language Processing · Large Language Models · 🏢 Huawei Noah's Ark Lab
Kangaroo: Double early exiting boosts LLM speed!
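Roughly, the mechanism looks like the sketch below; `model.shallow` and `adapter` are assumed interfaces, not Kangaroo's actual API. The first early exit reuses the LLM's own shallow layers as a free draft model; the second stops drafting as soon as draft confidence drops; one full forward pass then verifies the draft so the output matches greedy decoding exactly.

```python
import torch

@torch.no_grad()
def kangaroo_step(model, adapter, tokens, conf_thresh=0.6, max_draft=8):
    draft, ctx = [], tokens                          # tokens: (1, seq) context so far
    for _ in range(max_draft):
        h = model.shallow(ctx)                       # exit 1: shallow layers draft
        p, tok = adapter(h[:, -1]).softmax(-1).max(-1)
        if p.item() < conf_thresh:                   # exit 2: stop when unconfident
            break
        draft.append(tok)
        ctx = torch.cat([ctx, tok.view(1, 1)], dim=1)
    # Verify the whole draft with a single full-model pass and keep the
    # longest prefix matching the full model's greedy choices (lossless).
    verified = model(ctx)[:, tokens.shape[1] - 1:-1].argmax(-1)
    keep = 0
    for i, t in enumerate(draft):
        if verified[0, i].item() != t.item():
            break
        keep += 1
    return draft[:keep]
```

In practice the verification pass also yields one extra "bonus" token at the first mismatch, so each iteration advances by at least one token.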
Enhancing Large Language Models through Adaptive Tokenizers
·1963 words·10 mins
Natural Language Processing · Large Language Models · 🏢 Huawei Noah's Ark Lab
Adaptive tokenizers enhance LLMs by dynamically optimizing vocabulary during training, improving accuracy without increasing vocabulary size.
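One toy illustration of keeping vocabulary size fixed while adapting its contents (the utility score and swap rule here are assumptions, not the paper's criterion; a real system would score against the actual tokenization rather than raw substring counts):

```python
from collections import Counter

def swap_low_utility_tokens(vocab: set, corpus: list, n_swaps: int) -> set:
    """Drop the n_swaps least useful multi-character tokens and add the
    n_swaps most frequent unseen character pairs, keeping |vocab| constant
    (assuming enough candidates on both sides). Utility of a token here is
    frequency * (len - 1), i.e. how many characters it saves."""
    freq, pair_freq = Counter(), Counter()
    for text in corpus:
        for tok in vocab:
            freq[tok] += text.count(tok)
        for a, b in zip(text, text[1:]):
            if a + b not in vocab:
                pair_freq[a + b] += 1
    multi = [t for t in vocab if len(t) > 1]
    drop = sorted(multi, key=lambda t: freq[t] * (len(t) - 1))[:n_swaps]
    add = [pair for pair, _ in pair_freq.most_common(n_swaps)]
    return (vocab - set(drop)) | set(add)
```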
Analysing Multi-Task Regression via Random Matrix Theory with Application to Time Series Forecasting
·1646 words·8 mins
🏢 Huawei Noah's Ark Lab
This paper presents a novel theoretical framework for multi-task regression using random matrix theory, offering precise performance estimations and a closed-form solution for optimal hyperparameter t…
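For orientation, a generic multi-task ridge objective of the kind such random-matrix analyses usually study (the notation is illustrative, not necessarily the paper's):

```latex
% Task t has data (X_t, y_t); lambda is a shared regularization strength.
\hat{W} = \arg\min_{W=[w_1,\dots,w_T]} \sum_{t=1}^{T} \lVert y_t - X_t w_t \rVert_2^2 + \lambda \lVert W \rVert_F^2
```

In the proportional regime where feature dimension and sample count grow together, random matrix theory turns the test risk of such estimators into a deterministic function of a few problem parameters, which is what makes closed-form optimization of hyperparameters like λ possible.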