
🏢 Intel Labs

SQuARE: Sequential Question Answering Reasoning Engine for Enhanced Chain-of-Thought in Large Language Models
·4327 words·21 mins
AI Generated 🤗 Daily Papers Natural Language Processing Question Answering 🏢 Intel Labs
SQuARE, a novel prompting technique, enhances LLM reasoning by guiding the model to pose and answer its own sub-questions before responding, significantly outperforming traditional prompting methods.
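A minimal sketch of the idea, assuming a generic chat-completion call; the `query_llm` helper and the prompt wording below are illustrative, not the paper's exact template:

```python
# SQuARE-style prompting sketch: ask the model to generate and answer auxiliary
# sub-questions before committing to a final answer.

def query_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g., a chat-completion endpoint)."""
    raise NotImplementedError

def square_prompt(question: str, num_subquestions: int = 3) -> str:
    return (
        f"Question: {question}\n\n"
        f"Before answering, generate {num_subquestions} auxiliary questions that "
        "would help you reason about this problem, answer each of them, and only "
        "then give the final answer on a line starting with 'Final answer:'."
    )

def answer_with_square(question: str) -> str:
    response = query_llm(square_prompt(question))
    # Keep only the final answer; the sub-question/answer pairs serve as the reasoning trace.
    for line in response.splitlines():
        if line.lower().startswith("final answer:"):
            return line.split(":", 1)[1].strip()
    return response.strip()
```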
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression
·2154 words·11 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Intel Labs
Combining low-rank adapters with neural architecture search makes LLM compression far more practical, enabling efficient fine-tuning and a significantly reduced memory footprint.
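A rough sketch of the underlying idea, pairing an elastic low-rank adapter with a naive per-layer rank search; the class, candidate ranks, and scoring hook below are illustrative assumptions rather than the paper's implementation:

```python
# Elastic LoRA adapter whose active rank can be shrunk, plus an exhaustive
# search over per-layer rank configurations (illustrative sketch).
import itertools
import torch
import torch.nn as nn

class ElasticLoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, max_rank: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)                      # frozen pretrained weights
        self.lora_a = nn.Parameter(torch.randn(max_rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, max_rank))
        self.active_rank = max_rank                      # sub-rank chosen by the search

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = self.active_rank
        delta = (x @ self.lora_a[:r].T) @ self.lora_b[:, :r].T
        return self.base(x) + delta

def search_ranks(layers, candidate_ranks=(4, 8, 16), score_fn=None):
    """Try every rank configuration and keep the best-scoring one."""
    best_cfg, best_score = None, float("-inf")
    for cfg in itertools.product(candidate_ranks, repeat=len(layers)):
        for layer, r in zip(layers, cfg):
            layer.active_rank = r
        score = score_fn()   # e.g., validation accuracy minus a size penalty
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg
```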
Safeguard Fine-Tuned LLMs Through Pre- and Post-Tuning Model Merging
·269 words·2 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Intel Labs
Boost fine-tuned LLMs’ performance without sacrificing safety by merging pre- and post-tuning model weights!
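A minimal sketch of the merging step, assuming a simple linear interpolation of matching parameters; the `alpha` ratio and checkpoint names are illustrative, not the paper's exact recipe:

```python
# Merge pre-tuning (safety-aligned) and post-tuning (task fine-tuned) weights
# by interpolating each parameter tensor.
import torch

def merge_state_dicts(pre_tuned: dict, post_tuned: dict, alpha: float = 0.5) -> dict:
    """Return alpha * post_tuned + (1 - alpha) * pre_tuned, key by key."""
    merged = {}
    for name, pre_param in pre_tuned.items():
        merged[name] = alpha * post_tuned[name] + (1.0 - alpha) * pre_param
    return merged

# Hypothetical usage:
# base = torch.load("base_model.pt")          # pre-tuning, safety-aligned weights
# tuned = torch.load("fine_tuned_model.pt")   # post-tuning, task-adapted weights
# model.load_state_dict(merge_state_dicts(base, tuned, alpha=0.5))
```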