Posters

2024

Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack
·2933 words·14 mins
Natural Language Processing Large Language Models 🏢 Georgia Institute of Technology
Lisa, a novel lazy safety alignment method, safeguards LLMs against harmful fine-tuning attacks by introducing a proximal term that constrains model drift, significantly improving alignment performance.
LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning
·3222 words·16 mins
Natural Language Processing Large Language Models 🏢 Hong Kong University of Science and Technology
LISA, a layerwise importance sampling method, dramatically improves memory-efficient large language model fine-tuning, outperforming existing methods while using less GPU memory.
Lips Are Lying: Spotting the Temporal Inconsistency between Audio and Visual in Lip-Syncing DeepFakes
·2315 words·11 mins
Multimodal Learning Vision-Language Models 🏢 Carnegie Mellon University
LipFD, a novel method that leverages audio-visual inconsistencies to accurately spot lip-syncing deepfakes, outperforms existing methods and introduces a high-quality dataset for future research.
LION: Linear Group RNN for 3D Object Detection in Point Clouds
·3911 words·19 mins
AI Generated Computer Vision Object Detection 🏢 Huazhong University of Science and Technology
LION: Linear Group RNNs conquer 3D object detection in sparse point clouds by enabling efficient long-range feature interaction, significantly outperforming transformer-based methods.
LinNet: Linear Network for Efficient Point Cloud Representation Learning
·2362 words·12 mins
Computer Vision 3D Vision 🏢 Northwest University
LinNet: A linear-time point cloud network achieving 10x speedup over PointNeXt, with state-of-the-art accuracy on various benchmarks.
Linking In-context Learning in Transformers to Human Episodic Memory
·3883 words·19 mins
AI Generated Natural Language Processing Large Language Models 🏢 UC San Diego
Transformers’ in-context learning mirrors human episodic memory, with specific attention heads acting like the brain’s contextual maintenance and retrieval system.
Linguistic Collapse: Neural Collapse in (Large) Language Models
·6528 words·31 mins
AI Generated Natural Language Processing Large Language Models 🏢 University of Toronto
Scaling causal language models reveals a connection between neural collapse properties, model size, and improved generalization, highlighting NC’s broader relevance to LLMs.
Linearly Decomposing and Recomposing Vision Transformers for Diverse-Scale Models
·2125 words·10 mins
Computer Vision Image Classification 🏢 School of Computer Science and Engineering, Southeast University
Linearly decompose & recompose Vision Transformers to create diverse-scale models efficiently, reducing computational costs & improving flexibility for various applications.
Linear Uncertainty Quantification of Graphical Model Inference
·1866 words·9 mins
Machine Learning Active Learning 🏢 Key Laboratory of Trustworthy Distributed Computing and Service (MoE), Beijing University of Posts and Telecommunications
LinUProp: Linearly scalable uncertainty quantification for graphical models, achieving higher accuracy with lower labeling budgets!
Linear Transformers are Versatile In-Context Learners
·1783 words·9 mins
Machine Learning Optimization 🏢 Google Research
Linear transformers surprisingly learn intricate optimization algorithms, even surpassing baselines on noisy regression problems, showcasing their unexpected learning capabilities.
Linear Causal Representation Learning from Unknown Multi-node Interventions
·418 words·2 mins
AI Theory Causality 🏢 Carnegie Mellon University
Unlocking Causal Structures: New algorithms identify latent causal relationships from interventions, even when multiple variables are affected simultaneously.
Linear Causal Bandits: Unknown Graph and Soft Interventions
·1964 words·10 mins
AI Theory Causality 🏢 Rensselaer Polytechnic Institute
Causal bandits with unknown graphs and soft interventions are solved by establishing novel upper and lower regret bounds, plus a computationally efficient algorithm.
Limits of Transformer Language Models on Learning to Compose Algorithms
·2755 words·13 mins
Natural Language Processing Large Language Models 🏢 IBM Research
Large Language Models struggle with compositional tasks, requiring far more data to learn compositions than to learn the sub-tasks individually. This paper reveals surprising sample …
Lightweight Frequency Masker for Cross-Domain Few-Shot Semantic Segmentation
·3232 words·16 mins
AI Generated Computer Vision Image Segmentation 🏢 Huazhong University of Science and Technology
Lightweight Frequency Masker significantly improves cross-domain few-shot semantic segmentation by cleverly filtering frequency components of images, thereby reducing inter-channel correlation and enh…
Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering for HDR View Synthesis
·3953 words·19 mins
AI Generated Computer Vision 3D Vision 🏢 Nankai University
LE3D: Real-time HDR view synthesis from noisy RAW images is achieved using 3DGS, significantly reducing training time and improving rendering speed.
Light Unbalanced Optimal Transport
·2953 words·14 mins
Machine Learning Optimization 🏢 Skolkovo Institute of Science and Technology
Light Unbalanced Optimal Transport: a fast, theoretically justified solver for continuous unbalanced optimal transport problems, enabling efficient analysis of large datasets with imbalanced classes.
LG-VQ: Language-Guided Codebook Learning
·3656 words·18 mins
Multimodal Learning Vision-Language Models 🏢 Harbin Institute of Technology
LG-VQ: A novel language-guided codebook learning framework boosts multi-modal performance.
LG-CAV: Train Any Concept Activation Vector with Language Guidance
·3860 words·19 mins
AI Generated Computer Vision Vision-Language Models 🏢 Zhejiang University
LG-CAV: Train any Concept Activation Vector with Language Guidance, leverages vision-language models to train CAVs without labeled data, achieving superior accuracy and enabling state-of-the-art model…
LFME: A Simple Framework for Learning from Multiple Experts in Domain Generalization
·2799 words·14 mins
Machine Learning Domain Generalization 🏢 MBZUAI
LFME: a novel framework that improves domain generalization by training multiple expert models alongside a target model, using logit regularization to enhance performance.
Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding
·2610 words·13 mins
Computer Vision Scene Understanding 🏢 University of Illinois Urbana-Champaign
Lexicon3D: the first comprehensive study probing diverse visual foundation models for complex 3D scene understanding, revealing that unsupervised image models outperform others across various tasks.