
Posters

2024

Nearly Tight Black-Box Auditing of Differentially Private Machine Learning
·1819 words·9 mins
AI Theory Privacy 🏢 University College London
This paper presents a new auditing method for DP-SGD that provides substantially tighter black-box privacy analyses than previous methods, yielding significantly closer empirical estimates to theoreti…
Nearly Minimax Optimal Submodular Maximization with Bandit Feedback
·384 words·2 mins
AI Theory Optimization 🏢 University of Washington
This research establishes the first minimax optimal algorithm for submodular maximization with bandit feedback, achieving a regret bound matching the lower bound.
Nearly Minimax Optimal Regret for Multinomial Logistic Bandit
·1353 words·7 mins
AI Theory Optimization 🏢 Seoul National University
This paper presents OFU-MNL+, a constant-time algorithm achieving nearly minimax optimal regret for contextual multinomial logistic bandits, closing the gap between existing upper and lower bounds.
Nearest Neighbor Speculative Decoding for LLM Generation and Attribution
·2207 words·11 mins
AI Generated Natural Language Processing Large Language Models 🏢 Cohere
NEST, a novel semi-parametric language model, significantly boosts LLM generation quality, provides accurate source attribution, and achieves a 1.8x speedup in inference time by cleverly incorporating…
Near-Optimality of Contrastive Divergence Algorithms
·280 words·2 mins
Machine Learning Unsupervised Learning 🏢 Gatsby Computational Neuroscience Unit, University College London
Contrastive Divergence algorithms achieve near-optimal parameter estimation rates, matching the Cramér-Rao lower bound under specific conditions, as proven by a novel non-asymptotic analysis.
Near-Optimal Streaming Heavy-Tailed Statistical Estimation with Clipped SGD
·397 words·2 mins
AI Generated AI Theory Optimization 🏢 Stanford University
Clipped SGD achieves near-optimal sub-Gaussian rates for high-dimensional heavy-tailed statistical estimation in streaming settings, improving upon existing state-of-the-art results.
Near-Optimal Dynamic Regret for Adversarial Linear Mixture MDPs
·308 words·2 mins
Machine Learning Reinforcement Learning 🏢 National Key Laboratory for Novel Software Technology, Nanjing University, China
Near-optimal dynamic regret is achieved for adversarial linear mixture MDPs with unknown transitions, bridging occupancy-measure and policy-based methods for superior performance.
Near-Optimal Distributionally Robust Reinforcement Learning with General $L_p$ Norms
·556 words·3 mins
AI Generated Machine Learning Reinforcement Learning 🏢 Ecole Polytechnique
This paper presents near-optimal sample complexity bounds for solving distributionally robust reinforcement learning problems with general Lp norms, showing robust RL can be more sample-efficient than…
Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity
·1858 words·9 mins
AI Generated Machine Learning Optimization 🏢 School of Data Science, Fudan University
SVOGS achieves near-optimal complexities for distributed minimax optimization under second-order similarity, balancing communication and computation costs.
Near-Minimax-Optimal Distributional Reinforcement Learning with a Generative Model
·1906 words·9 mins
Machine Learning Reinforcement Learning 🏢 Google DeepMind
New distributional RL algorithm (DCFP) achieves near-minimax optimality for return distribution estimation in the generative model regime.
Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models
·3033 words·15 mins
AI Generated Natural Language Processing Large Language Models 🏢 Georgia Tech
Researchers discover ‘safety basins’ in LLMs, proposing a new metric (VISAGE) to quantify finetuning risks and visualize how these basins protect against safety compromise during model training.
Navigating the Effect of Parametrization for Dimensionality Reduction
·3077 words·15 mins
Machine Learning Dimensionality Reduction 🏢 Duke University
ParamRepulsor, a novel parametric dimensionality reduction method, achieves state-of-the-art local structure preservation by mining hard negatives and using a tailored loss function.
Navigating Extremes: Dynamic Sparsity in Large Output Spaces
·2090 words·10 mins
Natural Language Processing Text Classification 🏢 Department of Computer Science, Aalto University
SPARTEX achieves memory-efficient extreme multi-label classification by integrating dynamic sparse training with an auxiliary loss function, enabling end-to-end training with millions of labels on com…
Navigating Chemical Space with Latent Flows
·2900 words·14 mins
Machine Learning Deep Learning 🏢 Cornell University
ChemFlow: a new framework efficiently explores chemical space using latent flows, unifying existing methods & incorporating physical priors for molecule manipulation and optimization.
Navigable Graphs for High-Dimensional Nearest Neighbor Search: Constructions and Limits
·495 words·3 mins
AI Generated AI Theory Optimization 🏢 New York University
Sparse navigable graphs enable efficient nearest neighbor search, but their construction and limits in high dimensions remain unclear. This paper presents an efficient method to construct navigable gr…
Nature-Inspired Local Propagation
·1601 words·8 mins
AI Theory Optimization 🏢 IMT School for Advanced Studies
Inspired by nature, researchers introduce a novel spatiotemporal local algorithm for machine learning that outperforms backpropagation in online learning scenarios with limited data or long video stre…
Natural Counterfactuals With Necessary Backtracking
·3858 words·19 mins
AI Generated AI Theory Causality 🏢 Chinese University of Hong Kong
This paper proposes ‘natural counterfactuals’ for more realistic counterfactual reasoning in AI, using backtracking to minimize deviations from observed data while ensuring feasibility.
NaRCan: Natural Refined Canonical Image with Integration of Diffusion Prior for Video Editing
·2217 words·11 mins
Computer Vision Video Understanding 🏢 National Yang Ming Chiao Tung University
NaRCan: High-quality video editing via diffusion priors and hybrid deformation fields.
N-agent Ad Hoc Teamwork
·3605 words·17 mins
AI Generated Machine Learning Reinforcement Learning 🏢 University of Texas at Austin
New algorithm, POAM, excels at multi-agent cooperation by adapting to diverse and changing teammates in dynamic scenarios.
MVSplat360: Feed-Forward 360 Scene Synthesis from Sparse Views
·1997 words·10 mins
Computer Vision 3D Vision 🏢 Monash University
MVSplat360: Generating stunning 360° views from just a few images!