Posters

2024

Drago: Primal-Dual Coupled Variance Reduction for Faster Distributionally Robust Optimization
·1908 words·9 mins
AI Theory Optimization 🏢 University of Washington
DRAGO: A novel primal-dual algorithm with coupled variance reduction delivers faster, state-of-the-art convergence for distributionally robust optimization.
DRACO: A Denoising-Reconstruction Autoencoder for Cryo-EM
·1954 words·10 mins
Computer Vision Image Generation 🏢 School of Information Science and Technology, ShanghaiTech University
DRACO, a denoising-reconstruction autoencoder, revolutionizes cryo-EM by leveraging a large-scale dataset and hybrid training for superior image denoising and downstream task performance.
Doubly Mild Generalization for Offline Reinforcement Learning
·2279 words·11 mins
AI Generated Machine Learning Reinforcement Learning 🏢 Tsinghua University
Doubly Mild Generalization (DMG) improves offline reinforcement learning by selectively leveraging generalization beyond training data, achieving state-of-the-art results.
Doubly Hierarchical Geometric Representations for Strand-based Human Hairstyle Generation
·2527 words·12 mins
Computer Vision Image Generation 🏢 Carnegie Mellon University
Doubly hierarchical geometric representations enable realistic human hairstyle generation by separating low and high-frequency details in hair strands, resulting in high-quality, detailed virtual hair…
DOPPLER: Differentially Private Optimizers with Low-pass Filter for Privacy Noise Reduction
·2545 words·12 mins
AI Theory Privacy 🏢 University of Southern California
DOPPLER, a novel low-pass filter, significantly enhances differentially private (DP) optimizer performance by reducing the impact of privacy noise, bridging the gap between DP and non-DP training.
Don't Compress Gradients in Random Reshuffling: Compress Gradient Differences
·2058 words·10 mins
Machine Learning Federated Learning 🏢 King Abdullah University of Science and Technology
Boost federated learning efficiency! This paper introduces novel algorithms that cleverly combine gradient compression with random reshuffling, significantly reducing communication complexity and impr…
DomainGallery: Few-shot Domain-driven Image Generation by Attribute-centric Finetuning
·1917 words·9 mins
Computer Vision Image Generation 🏢 Shanghai Jiao Tong University
DomainGallery: Few-shot domain-driven image generation via attribute-centric finetuning, solving key issues of previous works by introducing attribute erasure, disentanglement, regularization, and enh…
Domain Adaptation for Large-Vocabulary Object Detectors
·4715 words·23 mins
AI Generated Computer Vision Object Detection 🏢 State Key Laboratory of Integrated Services Networks, Xidian University
KGD: a novel knowledge graph distillation technique empowers large-vocabulary object detectors with superior cross-domain object classification, achieving state-of-the-art performance.
Doing Experiments and Revising Rules with Natural Language and Probabilistic Reasoning
·3039 words·15 mins
Natural Language Processing Large Language Models 🏢 Cornell University
This paper introduces ActiveACRE, a model that uses LLMs and probabilistic inference to infer natural language rules through online experimentation, demonstrating higher accuracy than existing methods…
DOGS: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus
·3216 words·16 mins
AI Generated Computer Vision 3D Vision 🏢 National University of Singapore
DOGS: Distributed-Oriented Gaussian Splatting accelerates large-scale 3D reconstruction by distributing the training of 3D Gaussian Splatting models across multiple machines, achieving 6x faster train…
DoFIT: Domain-aware Federated Instruction Tuning with Alleviated Catastrophic Forgetting
·2536 words·12 mins
AI Generated Natural Language Processing Large Language Models 🏢 Nanjing University of Science and Technology
DoFIT: A novel domain-aware framework significantly reduces catastrophic forgetting in federated instruction tuning by finely aggregating overlapping weights and using a proximal perturbation initiali…
DOFEN: Deep Oblivious Forest ENsemble
·6861 words·33 mins
Machine Learning Deep Learning 🏢 Sinopac Holdings
DOFEN: Deep Oblivious Forest Ensemble achieves state-of-the-art performance on tabular data by using a novel DNN architecture inspired by oblivious decision trees, surpassing other DNNs.
Does Worst-Performing Agent Lead the Pack? Analyzing Agent Dynamics in Unified Distributed SGD
·1640 words·8 mins
AI Generated Machine Learning Federated Learning 🏢 North Carolina State University
A few high-performing agents using efficient sampling strategies can significantly boost the overall convergence speed of distributed machine learning algorithms, surpassing the performance of many mo…
Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection?
·1983 words·10 mins
Computer Vision Action Recognition 🏢 Tongji University
Zero-shot online action detection gets a boost! OV-OAD leverages vision-language models and text supervision to achieve impressive performance on various benchmarks without relying on manual annotati…
Does Reasoning Emerge? Examining the Probabilities of Causation in Large Language Models
·2327 words·11 mins
Natural Language Processing Large Language Models 🏢 Microsoft Research
LLMs’ reasoning abilities are assessed via a novel framework that leverages probabilities of causation, revealing that while capable, their understanding of causality falls short of human-level reason…
Does Egalitarian Fairness Lead to Instability? The Fairness Bounds in Stable Federated Learning Under Altruistic Behaviors
·1528 words·8 mins
Machine Learning Federated Learning 🏢 Southern University of Science and Technology
Achieving egalitarian fairness in federated learning without sacrificing stability is possible; this paper derives optimal fairness bounds considering clients’ altruism and network topology.
Do's and Don'ts: Learning Desirable Skills with Instruction Videos
·2781 words·14 mins
AI Generated Machine Learning Reinforcement Learning 🏢 KAIST
DoDont, a novel algorithm, uses instruction videos to guide unsupervised skill discovery, effectively learning desirable behaviors while avoiding undesirable ones in complex continuous control tasks.
Do LLMs dream of elephants (when told not to)? Latent concept association and associative memory in transformers
·2914 words·14 mins
Natural Language Processing Large Language Models 🏢 Department of Computer Science, University of Chicago
LLMs’ fact retrieval is easily manipulated by context, highlighting their associative memory behavior; this paper studies this with transformers, showing how self-attention and value matrices support …
Do LLMs Build World Representations? Probing Through the Lens of State Abstraction
·2243 words·11 mins
Natural Language Processing Large Language Models 🏢 Mila, McGill University
LLMs prioritize task completion over full world-state understanding by using goal-oriented abstractions.
DN-4DGS: Denoised Deformable Network with Temporal-Spatial Aggregation for Dynamic Scene Rendering
·2765 words·13 mins
Computer Vision 3D Vision 🏢 University of Science and Technology of China
DN-4DGS: Real-time dynamic scene rendering is revolutionized by a denoised deformable network with temporal-spatial aggregation, achieving state-of-the-art quality.