
Posters

2024

Long-range Meta-path Search on Large-scale Heterogeneous Graphs
·2383 words·12 mins
Machine Learning Representation Learning 🏢 Huazhong University of Science and Technology
LMSPS: a novel framework that efficiently leverages long-range dependencies in large heterogeneous graphs by dynamically identifying effective meta-paths, mitigating computational costs and over-smoothing.
Long-Range Feedback Spiking Network Captures Dynamic and Static Representations of the Visual Cortex under Movie Stimuli
·2020 words·10 mins
Computer Vision Video Understanding 🏢 Peking University
Long-range feedback spiking network (LoRaFB-SNet) surpasses other models in capturing dynamic and static visual cortical representations under movie stimuli, advancing our understanding of the visual system.
Long-range Brain Graph Transformer
·2187 words·11 mins
AI Applications Healthcare 🏢 Dalian University of Technology
ALTER, a novel brain graph transformer, leverages long-range dependencies to achieve state-of-the-art accuracy in neurological disease diagnosis.
Long-Horizon Planning for Multi-Agent Robots in Partially Observable Environments
·2334 words·11 mins
AI Applications Robotics 🏢 MIT
LLaMAR: an LM-based planner for multi-agent robots that excels in long-horizon, partially observable tasks, achieving a 30% higher success rate than existing methods.
Long-form factuality in large language models
·4779 words·23 mins
Natural Language Processing Large Language Models 🏢 Google DeepMind
LLMs often generate factually inaccurate long-form text. This work introduces LongFact, a new benchmark dataset of 2280 fact-seeking prompts, and SAFE, a novel automated evaluation method that outperforms crowdsourced human annotators.
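The SAFE idea is simple enough to sketch: decompose a long-form response into atomic facts, then rate each fact against retrieved evidence. The snippet below is a minimal, hypothetical illustration of that pipeline, not the paper's implementation; `llm` and `search` are assumed callables standing in for a language model and a search backend.

```python
def safe_style_check(response: str, llm, search) -> dict[str, bool]:
    """Split a long-form answer into atomic facts and verify each
    against retrieved evidence (SAFE-style, heavily simplified)."""
    facts = llm(
        f"List each atomic factual claim in the text, one per line:\n{response}"
    ).splitlines()
    verdicts = {}
    for fact in filter(None, map(str.strip, facts)):
        evidence = search(fact)  # assumed: returns text snippets
        answer = llm(f"Claim: {fact}\nEvidence: {evidence}\nSupported? yes/no")
        verdicts[fact] = answer.strip().lower().startswith("yes")
    return verdicts
```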
Loki: Low-rank Keys for Efficient Sparse Attention
·3255 words·16 mins
Natural Language Processing Large Language Models 🏢 University of Maryland
Loki: Low-rank Keys for Efficient Sparse Attention accelerates attention in LLMs by exploiting the low dimensionality of key vectors. It dynamically selects key tokens based on approximate attention scores computed in a low-dimensional space.
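The summary names the mechanism directly: keys lie close to a low-dimensional subspace, so attention scores can be approximated cheaply there and exact attention computed only over the top-scoring tokens. A minimal PyTorch sketch under those assumptions, where the projection `P` (e.g., a PCA basis from calibration keys) is taken as given; this is not the paper's code:

```python
import torch

def low_rank_sparse_attention(q, K, V, P, k_top=64):
    """Score keys in a reduced subspace, then attend only to the top-k tokens.

    q: (d,) query, K/V: (n, d) cached keys/values, P: (d, r) projection, r << d.
    Assumes k_top <= n."""
    # Approximate scores using r-dimensional projections of q and K.
    approx = (K @ P) @ (P.T @ q)                   # (n,)
    idx = torch.topk(approx, k_top).indices        # most promising tokens
    # Exact scaled dot-product attention over the selected tokens only.
    scores = (K[idx] @ q) / K.shape[1] ** 0.5      # (k_top,)
    return torch.softmax(scores, dim=0) @ V[idx]   # (d,)
```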
Logical characterizations of recurrent graph neural networks with reals and floats
·334 words·2 mins
AI Theory Representation Learning 🏢 Tampere University
Recurrent Graph Neural Networks (GNNs) with real and floating-point numbers are precisely characterized by rule-based and infinitary modal logics, respectively, enabling a deeper understanding of their expressive power.
Log-concave Sampling from a Convex Body with a Barrier: a Robust and Unified Dikin Walk
·1308 words·7 mins
AI Theory Optimization 🏢 New York University
This paper introduces robust Dikin walks for log-concave sampling, achieving faster mixing times and lower iteration costs than existing methods, particularly for high-dimensional settings.
LoFiT: Localized Fine-tuning on LLM Representations
·4045 words·19 mins
AI Generated Natural Language Processing Large Language Models 🏢 University of Texas at Austin
LOFIT: Localized fine-tuning boosts LLMs’ performance by selectively training only a small subset of attention heads, achieving accuracy comparable to other methods while using significantly fewer parameters.
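As a rough picture of localized fine-tuning: the base model is frozen and only small learnable vectors attached to a pre-selected set of attention heads are trained. A hedged sketch with hypothetical names, where the head-selection step is assumed done elsewhere:

```python
import torch
import torch.nn as nn

class HeadOffsets(nn.Module):
    """Learnable additive offsets for a chosen subset of attention heads.

    The base model stays frozen; only these vectors are trained, which is
    why the number of tunable parameters is tiny."""

    def __init__(self, selected_heads, head_dim):
        super().__init__()
        # selected_heads: iterable of (layer, head) index pairs, assumed
        # to be chosen beforehand by some head-importance score.
        self.offsets = nn.ParameterDict({
            f"L{layer}H{head}": nn.Parameter(torch.zeros(head_dim))
            for layer, head in selected_heads
        })

    def forward(self, layer: int, head: int, head_output: torch.Tensor):
        # Called from a hook on each attention head's output; heads that
        # were not selected pass through unchanged.
        key = f"L{layer}H{head}"
        if key in self.offsets:
            head_output = head_output + self.offsets[key]
        return head_output
```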
LoD-Loc: Aerial Visual Localization using LoD 3D Map with Neural Wireframe Alignment
·3487 words·17 mins
Computer Vision 3D Vision 🏢 National University of Defense Technology
LoD-Loc: a novel aerial visual localization method that uses lightweight LoD 3D maps and neural wireframe alignment for accurate, efficient 6-DoF pose estimation, surpassing state-of-the-art methods.
LoCo: Learning 3D Location-Consistent Image Features with a Memory-Efficient Ranking Loss
·1960 words·10 mins
Computer Vision 3D Vision 🏢 University of Oxford
LoCo: memory-efficient, location-consistent image features learned via a novel ranking loss, cutting memory use by three orders of magnitude and outperforming the state of the art.
LocCa: Visual Pretraining with Location-aware Captioners
·2114 words·10 mins
Multimodal Learning Vision-Language Models 🏢 Google DeepMind
LocCa, a novel visual pretraining paradigm, uses location-aware captioning tasks to boost downstream localization performance while maintaining holistic task capabilities.
Locating What You Need: Towards Adapting Diffusion Models to OOD Concepts In-the-Wild
·3829 words·18 mins
AI Generated Computer Vision Image Generation 🏢 Zhejiang University
The CATOD framework improves text-to-image generation by using active learning to gather high-quality training data, enabling accurate depiction of out-of-distribution concepts.
Locally Private and Robust Multi-Armed Bandits
·1621 words·8 mins
AI Theory Privacy 🏢 Wayne State University
This research unveils a fundamental interplay between local differential privacy (LDP) and robustness against data corruption and heavy-tailed rewards in multi-armed bandits, offering a tight characterization of the achievable regret.
Localizing Memorization in SSL Vision Encoders
·4999 words·24 mins
Machine Learning Self-Supervised Learning 🏢 CISPA, Helmholtz Center for Information Security
SSL vision encoders, though trained on massive datasets, surprisingly memorize individual data points. This paper introduces novel methods to precisely pinpoint this memorization within encoders at both the layer and individual-unit level.
Localized Adaptive Risk Control
·2386 words·12 mins
AI Generated AI Theory Fairness 🏢 University of Cambridge
Localized Adaptive Risk Control (L-ARC) improves the fairness and reliability of online prediction by providing localized statistical risk guarantees, surpassing existing methods in high-stakes applications.
Localize, Understand, Collaborate: Semantic-Aware Dragging via Intention Reasoner
·2916 words·14 mins
AI Generated Computer Vision Image Generation 🏢 Beijing University of Posts and Telecommunications
LucidDrag: Semantic-aware dragging transforms image editing with an intention reasoner and collaborative guidance, achieving superior accuracy, image fidelity, and semantic diversity.
Local to Global: Learning Dynamics and Effect of Initialization for Transformers
·2433 words·12 mins
AI Generated Natural Language Processing Text Generation 🏢 EPFL
Transformers’ learning dynamics depend heavily on initialization and Markovian data properties, leading to either global or local minima; this paper proves this, offers initialization guidelines, and validates the findings empirically.
Local Superior Soups: A Catalyst for Model Merging in Cross-Silo Federated Learning
·3305 words·16 mins
AI Generated Machine Learning Federated Learning 🏢 University of British Columbia
Local Superior Soups (LSS) significantly accelerates federated learning by efficiently merging pre-trained models, drastically cutting communication rounds without sacrificing accuracy.
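The merging step at the heart of any model-soup method is plain parameter averaging; LSS's contribution lies in how candidate models are trained and selected locally, which this minimal sketch deliberately omits. The weighting option is an assumption for illustration; uniform weights recover the classic soup.

```python
import torch

def make_soup(state_dicts, weights=None):
    """Merge several fine-tuned models by (weighted) parameter averaging.

    state_dicts: list of state_dicts from models sharing one architecture.
    weights: optional per-model weights; uniform averaging by default."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    return {
        key: sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
        for key in state_dicts[0]
    }
```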
Local Linearity: the Key for No-regret Reinforcement Learning in Continuous MDPs
·451 words·3 mins
Machine Learning Reinforcement Learning 🏢 Politecnico Di Milano
CINDERELLA: a new algorithm that achieves state-of-the-art no-regret bounds for continuous RL problems by exploiting local linearity.