
Posters

2024

DMNet: Self-comparison Driven Model for Subject-independent Seizure Detection
·2151 words·11 mins
AI Applications Healthcare 🏢 Zhejiang University
DMNet: A novel self-comparison driven model significantly improves subject-independent seizure detection from intracranial EEG, outperforming existing methods.
DMesh: A Differentiable Mesh Representation
·3349 words·16 mins
Computer Vision 3D Vision 🏢 University of Maryland
DMesh: A novel differentiable mesh representation enabling efficient gradient-based optimization for diverse 3D shape applications.
DALD: Improving Logits-based Detector without Logits from Black-box LLMs
·2559 words·13 mins
Natural Language Processing Large Language Models 🏢 MBZUAI
DALD: A novel framework for black-box LLM text detection, achieving state-of-the-art performance without relying on source model logits, by aligning surrogate model distributions.
Divide-and-Conquer Predictive Coding: a structured Bayesian inference algorithm
·1683 words·8 mins
AI Theory Representation Learning 🏢 Department of Psychology, Vanderbilt University
Divide-and-conquer predictive coding (DCPC) revolutionizes structured Bayesian inference by achieving superior performance in high-dimensional problems while remaining biologically plausible.
Divide-and-Conquer Posterior Sampling for Denoising Diffusion priors
·3064 words·15 mins
Machine Learning Deep Learning 🏢 CMAP, Ecole Polytechnique
Divide-and-Conquer Posterior Sampling (DCPS) efficiently samples complex posterior distributions from denoising diffusion models (DDMs) for Bayesian inverse problems, significantly improving accuracy …
Diversity Is Not All You Need: Training A Robust Cooperative Agent Needs Specialist Partners
·1922 words·10 mins
AI Theory Robustness 🏢 VISTEC
Training robust cooperative AI agents requires diverse and specialized training partners, but existing methods often produce overfit partners. This paper proposes novel methods using reinforcement and…
Divergences between Language Models and Human Brains
·2519 words·12 mins
Natural Language Processing Large Language Models 🏢 Carnegie Mellon University
Language models struggle with social/emotional intelligence and physical commonsense, unlike human brains. Fine-tuning models on these aspects improves their brain response prediction accuracy.
DiTFastAttn: Attention Compression for Diffusion Transformer Models
·2788 words·14 mins
Computer Vision Image Generation 🏢 Tsinghua University
DiTFastAttn: A post-training compression method drastically speeds up diffusion transformer models by cleverly reducing redundancy in attention calculations, leading to up to a 1.8x speedup at high re…
DistrictNet: Decision-aware learning for geographical districting
·2460 words·12 mins
AI Theory Optimization 🏢 Polytechnique Montreal
DistrictNet: A novel decision-aware learning approach drastically cuts geographical districting costs by integrating combinatorial optimization and graph neural networks.
Distributionally Robust Reinforcement Learning with Interactive Data Collection: Fundamental Hardness and Near-Optimal Algorithms
·518 words·3 mins
Machine Learning Reinforcement Learning 🏢 Stanford University
Provably sample-efficient robust RL via interactive data collection is achieved by introducing the vanishing minimal value assumption to mitigate the curse of support shift, enabling near-optimal algo…
Distributionally Robust Performative Prediction
·2341 words·11 mins
AI Generated AI Theory Optimization 🏢 University of Michigan
This research introduces distributionally robust performative prediction, offering a new solution concept (DRPO) that minimizes performative risk even with misspecified distribution maps, ensuring rob…
Distributional Successor Features Enable Zero-Shot Policy Optimization
·2834 words·14 mins
AI Generated Machine Learning Reinforcement Learning 🏢 University of Washington
DiSPOs: a novel model for zero-shot policy optimization in reinforcement learning, enabling quick adaptation to new tasks by learning a distribution of successor features and avoiding compounding erro…
Distributional Reinforcement Learning with Regularized Wasserstein Loss
·2196 words·11 mins
Machine Learning Reinforcement Learning 🏢 University of Alberta
Sinkhorn distributional RL (SinkhornDRL) learns return distributions with an entropy-regularized Wasserstein (Sinkhorn) loss, interpolating between the Wasserstein distance and MMD.
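The entropy-regularized Wasserstein cost mentioned above can be sketched in a few lines of NumPy. This is an illustrative toy, not SinkhornDRL's implementation: `sinkhorn_distance`, the uniform weights, and the squared-distance cost are all assumptions for the demo.

```python
import numpy as np

def sinkhorn_distance(x, y, eps=1.0, n_iters=200):
    """Entropy-regularized OT cost between two 1-D empirical distributions
    with uniform weights, computed via Sinkhorn iterations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    C = (x[:, None] - y[None, :]) ** 2        # squared-distance cost matrix
    K = np.exp(-C / eps)                      # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))         # uniform source marginal
    b = np.full(len(y), 1.0 / len(y))         # uniform target marginal
    u = np.ones_like(a)
    for _ in range(n_iters):                  # alternate marginal scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]           # entropic transport plan
    return float((P * C).sum())

# Identical return samples cost less to transport than shifted ones.
print(sinkhorn_distance([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))
print(sinkhorn_distance([0.0, 1.0, 2.0], [2.0, 3.0, 4.0]))
```

The regularization strength `eps` controls the interpolation: small `eps` approaches the unregularized Wasserstein cost, large `eps` smooths the plan toward the independent coupling.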
Distributional regression: CRPS-error bounds for model fitting, model selection and convex aggregation
·348 words·2 mins
AI Theory Optimization 🏢 University of Franche-Comté
This paper provides the first statistical learning guarantees for distributional regression using CRPS, offering concentration bounds for model fitting, selection, and convex aggregation, applicable t…
Distributional Preference Alignment of LLMs via Optimal Transport
·2204 words·11 mins
Natural Language Processing Large Language Models 🏢 IBM Research
LLMs are aligned to human preferences distributionally using Optimal Transport, achieving state-of-the-art performance.
Distribution-Aware Data Expansion with Diffusion Models
·3351 words·16 mins
AI Generated Computer Vision Image Classification 🏢 Tsinghua University
DistDiff, a training-free data expansion framework, leverages distribution-aware diffusion models to generate high-fidelity, diverse samples that enhance downstream model performance.
Distribution Learning with Valid Outputs Beyond the Worst-Case
·320 words·2 mins
AI Theory Optimization 🏢 UC San Diego
Generative models often produce invalid outputs; this work shows that ensuring validity is easier than expected when using log-loss and carefully selecting model classes and data distributions.
Distribution Guidance Network for Weakly Supervised Point Cloud Semantic Segmentation
·2253 words·11 mins
Computer Vision 3D Vision 🏢 Peking University
DGNet enhances weakly supervised point cloud segmentation by aligning feature embeddings to a mixture of von Mises-Fisher distributions, achieving state-of-the-art performance.
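The von Mises-Fisher distributions DGNet aligns embeddings to are densities on the unit sphere; for d = 3 the normalizer has a closed form, which makes the idea easy to illustrate. This sketch is not DGNet's code, and `vmf3_log_pdf` is a hypothetical helper.

```python
import numpy as np

def vmf3_log_pdf(x, mu, kappa):
    """Log-density of a von Mises-Fisher distribution on the unit 2-sphere
    (d = 3), whose normalizer is kappa / (4 * pi * sinh(kappa)).
    x and mu must be unit vectors; kappa > 0 is the concentration."""
    log_c = np.log(kappa) - np.log(4.0 * np.pi * np.sinh(kappa))
    return log_c + kappa * float(np.dot(mu, x))

mu = np.array([1.0, 0.0, 0.0])          # mean direction of one class
x_aligned = mu                          # embedding matching the class direction
x_orthogonal = np.array([0.0, 1.0, 0.0])

print(vmf3_log_pdf(x_aligned, mu, 10.0))     # high log-likelihood at the mean
print(vmf3_log_pdf(x_orthogonal, mu, 10.0))  # lower away from the mean
```

Intuitively, pulling an embedding toward its class's mean direction `mu` raises its log-likelihood under that vMF component, which is the alignment signal a mixture of such components provides.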
Distributed Least Squares in Small Space via Sketching and Bias Reduction
·1322 words·7 mins
Machine Learning Optimization 🏢 University of Michigan
Researchers developed a novel sparse sketching method for distributed least squares regression, achieving near-unbiased estimates with optimal space and time complexity.
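The sketch-and-solve idea behind that result can be demonstrated in NumPy. Note the assumptions: a dense Gaussian sketch stands in for the paper's sparse sketching operator, the sizes are arbitrary, and no bias-reduction step is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 10, 200                # tall system: n rows, d cols; sketch size m << n

A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)   # noisy observations

# Dense Gaussian sketch for simplicity; the paper's contribution is a *sparse*
# sketch with bias reduction that works in small space.
S = rng.standard_normal((m, n)) / np.sqrt(m)

x_full, *_ = np.linalg.lstsq(A, b, rcond=None)           # full n x d solve
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)  # cheap m x d solve

rel_err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
print(rel_err)
```

The sketched system is 10x smaller here yet recovers nearly the same solution; the distributed setting averages several such sketched estimates, which is where controlling their bias matters.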
DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features
·2827 words·14 mins
AI Generated Computer Vision 3D Vision 🏢 NVIDIA Research
DistillNeRF: a self-supervised learning framework enabling accurate 3D scene reconstruction from sparse, single-frame images by cleverly distilling features from offline NeRFs and 2D foundation models…