
Posters

2024

Strategic Linear Contextual Bandits
·1349 words·7 mins
AI Theory Optimization 🏢 Alan Turing Institute
The problem of strategic agents gaming recommender systems is addressed by a novel mechanism that incentivizes truthful behavior while minimizing regret, tackling a key challenge in online learning.
Stopping Bayesian Optimization with Probabilistic Regret Bounds
·3802 words·18 mins
Machine Learning Optimization 🏢 Morgan Stanley
This paper presents a novel probabilistic regret bound (PRB) framework for Bayesian optimization, replacing the traditional fixed-budget stopping rule with a criterion based on the probability of find…
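A minimal sketch of the stopping idea, assuming a surrogate (e.g. a Gaussian process) whose joint posterior can be sampled over a candidate grid; the `posterior_samples` array and the (eps, delta) names below are illustrative, not the paper's API. The rule stops once the incumbent is within eps of the sampled optimum in at least a 1−delta fraction of posterior draws.

```python
import numpy as np

def prb_stop(posterior_samples, incumbent_value, eps=0.1, delta=0.05):
    """Probabilistic-regret-bound stopping test (illustrative, maximization).

    posterior_samples: (S, M) array -- S joint surrogate draws over M
        candidate points.
    incumbent_value: best observed objective value so far.
    Stop when the incumbent is within eps of the sampled optimum in at
    least a 1 - delta fraction of posterior draws.
    """
    sampled_optima = posterior_samples.max(axis=1)   # optimum per draw
    regret_ok = (sampled_optima - incumbent_value) <= eps
    return regret_ok.mean() >= 1.0 - delta

# Toy usage: fake posterior draws around a known optimum near 1.0.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.9, scale=0.05, size=(2000, 50))   # (S, M)
print(prb_stop(samples, incumbent_value=0.95))
```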
STONE: A Submodular Optimization Framework for Active 3D Object Detection
·2151 words·11 mins
AI Generated Computer Vision 3D Vision 🏢 University of Texas at Dallas
STONE: A novel submodular optimization framework drastically cuts 3D object detection training costs by cleverly selecting the most informative LiDAR point cloud data for labeling, achieving state-of-…
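The acquisition objective itself is the paper's contribution; as a hedged stand-in, the sketch below greedily maximizes a generic facility-location (coverage) function over pairwise scan similarities, which is the standard pattern for submodular data selection. All names (`sim`, `budget`) are illustrative.

```python
import numpy as np

def greedy_submodular_select(sim, budget):
    """Greedy maximization of a facility-location function (illustrative).

    sim: (n, n) pairwise similarity matrix between unlabeled scans.
    Returns indices of `budget` scans that best 'cover' the pool --
    a generic stand-in for STONE's actual acquisition objective.
    """
    n = sim.shape[0]
    selected, coverage = [], np.zeros(n)
    for _ in range(budget):
        # Marginal gain of adding each candidate j to the selection.
        gains = np.maximum(sim, coverage).sum(axis=1) - coverage.sum()
        gains[selected] = -np.inf            # never re-pick a scan
        j = int(np.argmax(gains))
        selected.append(j)
        coverage = np.maximum(coverage, sim[j])
    return selected

rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 16))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
print(greedy_submodular_select(feats @ feats.T, budget=5))
```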
Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity
·361 words·2 mins
AI Theory Optimization 🏢 UC Santa Barbara
Stochastic zeroth-order optimization of strongly convex functions with Lipschitz Hessian achieves optimal sample complexity, as proven by matching upper and lower bounds with a novel two-stage algorit…
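For orientation, here is the generic two-point zeroth-order gradient estimator that such function-value-only methods build on; the paper's actual two-stage algorithm and step-size choices are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def zo_grad(f, x, mu=1e-3, n_dirs=8):
    """Two-point zeroth-order gradient estimate: average directional
    finite differences along random Gaussian directions. Uses only
    function evaluations, never an explicit gradient."""
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

# Strongly convex toy objective with a smooth Hessian.
f = lambda x: x @ x + np.sin(x).sum()
x = np.full(5, 3.0)
for _ in range(300):
    x -= 0.1 * zo_grad(f, x)
print(x, f(x))
```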
Stochastic Optimization Schemes for Performative Prediction with Nonconvex Loss
·2232 words·11 mins
AI Generated Machine Learning Optimization 🏢 Chinese University of Hong Kong
Bias-free performative prediction is achieved using a novel lazy deployment scheme with SGD, handling non-convex loss functions.
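A minimal sketch of lazy deployment, assuming the performative setting where the sampled data depend on the last *deployed* model: the deployed copy is refreshed only every K SGD steps. The toy loss here is squared error for brevity, whereas the paper's analysis covers nonconvex losses.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_data(theta_deployed, n=64):
    """Performative data: the distribution shifts with the deployed model."""
    x = rng.normal(size=(n, 2))
    y = (x @ np.array([1.0, -2.0]) + 0.5 * x @ theta_deployed
         + rng.normal(scale=0.1, size=n))
    return x, y

theta = np.zeros(2)
theta_deployed = theta.copy()
K, lr = 20, 0.05                        # deploy lazily, every K SGD steps
for t in range(400):
    if t % K == 0:                      # lazy deployment
        theta_deployed = theta.copy()
    x, y = sample_data(theta_deployed)
    grad = 2 * x.T @ (x @ theta - y) / len(y)   # squared-loss gradient
    theta -= lr * grad
print(theta)   # settles at the performatively stable solution
```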
Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data
·1722 words·9 mins
AI Theory Causality 🏢 UC Davis
New streaming algorithms for instrumental variable regression achieve fast convergence rates, solving the problem efficiently without matrix inversions or mini-batches, enabling real-time causal analy…
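A hedged sketch of the streaming two-stage idea (not the paper's exact algorithm or step-size schedule): run SGD simultaneously on the stage-1 regression of covariates on instruments and on the stage-2 regression of outcomes on predicted covariates, with no matrix inversion anywhere.

```python
import numpy as np

rng = np.random.default_rng(3)
beta_true = np.array([2.0, -1.0])

def stream():
    """One (instrument z, covariate x, outcome y) observation, with
    confounding: the noise u hits both x and y."""
    z = rng.normal(size=2)
    u = rng.normal()
    x = 0.8 * z + u + 0.1 * rng.normal(size=2)
    y = x @ beta_true + u + 0.1 * rng.normal()
    return z, x, y

W = np.zeros((2, 2))        # stage 1: x ≈ W z
beta = np.zeros(2)          # stage 2: y ≈ beta · (W z)
lr = 0.02
for t in range(20000):
    z, x, y = stream()
    xhat = W @ z
    W -= lr * np.outer(xhat - x, z)          # SGD on stage-1 least squares
    beta -= lr * (xhat @ beta - y) * xhat    # SGD on stage-2 least squares
print(beta)   # approaches beta_true despite the confounder
```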
Stochastic Optimal Control Matching
·1801 words·9 mins
AI Theory Optimization 🏢 Meta AI
Stochastic Optimal Control Matching (SOCM) significantly reduces errors in stochastic optimal control by learning a matching vector field using a novel iterative diffusion optimization technique.
Stochastic Optimal Control for Diffusion Bridges in Function Spaces
·2194 words·11 mins
Machine Learning Deep Learning 🏢 KAIST
Researchers extended stochastic optimal control theory to infinite-dimensional spaces, enabling the creation of diffusion bridges for generative modeling in function spaces, demonstrating applications…
Stochastic Optimal Control and Estimation with Multiplicative and Internal Noise
·2159 words·11 mins
AI Theory Optimization 🏢 Pompeu Fabra University
A novel algorithm significantly improves stochastic optimal control by accurately modeling sensorimotor noise, achieving substantially lower costs than current state-of-the-art solutions, particularly…
Stochastic Newton Proximal Extragradient Method
·1769 words·9 mins
AI Generated AI Theory Optimization 🏢 University of Texas at Austin
Stochastic Newton Proximal Extragradient (SNPE) achieves faster global and local convergence rates for strongly convex functions, improving upon existing stochastic Newton methods by requiring signifi…
Stochastic Kernel Regularisation Improves Generalisation in Deep Kernel Machines
·1434 words·7 mins
Machine Learning Deep Learning 🏢 University of Bristol
Deep kernel machines now achieve 94.5% accuracy on CIFAR-10, matching neural networks, by using stochastic kernel regularization to improve generalization.
Stochastic Extragradient with Flip-Flop Shuffling & Anchoring: Provable Improvements
·1705 words·9 mins
AI Generated AI Theory Optimization 🏢 KAIST
Stochastic extragradient with flip-flop shuffling & anchoring achieves provably faster convergence in minimax optimization.
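A loose sketch of the two ingredients on a toy finite-sum bilinear saddle problem (the paper's exact update rule, step sizes, and anchoring schedule differ): each epoch processes the components in a random order and then that order reversed (flip-flop), and the iterate is periodically pulled back toward the starting point (anchoring).

```python
import numpy as np

rng = np.random.default_rng(5)

# Finite-sum bilinear saddle problem: min_x max_y (1/n) sum_i x·A_i y.
n, d = 8, 3
A = rng.normal(size=(n, d, d))

def op(z, i):
    """Monotone operator F_i(z) = (A_i y, -A_i^T x) for component i."""
    x, y = z[:d], z[d:]
    return np.concatenate([A[i] @ y, -A[i].T @ x])

z0 = z = rng.normal(size=2 * d)
lr = 0.05
for epoch in range(200):
    perm = rng.permutation(n)
    order = np.concatenate([perm, perm[::-1]])   # flip-flop: pass + reverse
    for i in order:
        z_half = z - lr * op(z, i)               # extrapolation step
        z = z - lr * op(z_half, i)               # extragradient update
    t = epoch + 1
    z = z0 / (t + 1) + z * t / (t + 1)           # anchoring toward z0
print(np.linalg.norm(z))   # distance to the saddle point at the origin
```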
Stochastic contextual bandits with graph feedback: from independence number to MAS number
·289 words·2 mins
Machine Learning Reinforcement Learning 🏢 New York University
Contextual bandits with graph feedback achieve near-optimal regret by leveraging a novel graph-theoretic quantity that interpolates between independence and maximum acyclic subgraph numbers, depending…
Stochastic Concept Bottleneck Models
·2532 words·12 mins
AI Generated AI Theory Interpretability 🏢 ETH Zurich
Stochastic Concept Bottleneck Models (SCBMs) revolutionize interpretable ML by efficiently modeling concept dependencies, drastically improving intervention effectiveness and enabling CLIP-based conce…
Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution
·2842 words·14 mins
AI Theory Interpretability 🏢 Stanford University
Stochastic Amortization accelerates feature and data attribution by training amortized models using noisy, yet unbiased, labels, achieving order-of-magnitude speedups over existing methods.
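The core trick is that regression on unbiased noisy targets converges to the clean regression function. A toy sketch, with a hypothetical one-sample estimator standing in for expensive attribution labels (e.g. Monte Carlo Shapley estimates):

```python
import numpy as np

rng = np.random.default_rng(6)

# Ground-truth attribution map we want to amortize: phi(x) = x * w (toy).
d, w = 4, np.array([1.0, -0.5, 2.0, 0.0])

def noisy_label(x):
    """Cheap, unbiased but high-variance estimate of the true attribution
    (a stand-in for a one-sample Monte Carlo attribution label)."""
    return x * w + rng.normal(scale=1.0, size=d)

# Amortized predictor: per-feature linear model trained by SGD on noisy
# labels. Because the label noise is zero-mean, it averages out and the
# model learns the clean attribution map.
W = np.zeros((d, d))
lr = 0.01
for _ in range(5000):
    x = rng.normal(size=d)
    pred = W @ x
    W -= lr * np.outer(pred - noisy_label(x), x)
print(np.round(W, 2))   # approaches diag(w)
```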
STL: Still Tricky Logic (for System Validation, Even When Showing Your Work)
·1760 words·9 mins
AI Applications Robotics 🏢 MIT
Human understanding of formal specifications for robot validation is surprisingly poor; active learning, while improving engagement, doesn’t significantly boost accuracy.
Stepwise Alignment for Constrained Language Model Policy Optimization
·2517 words·12 mins
AI Theory Safety 🏢 University of Tsukuba
Stepwise Alignment for Constrained Policy Optimization (SACPO) efficiently aligns LLMs with human values, prioritizing both helpfulness and harmlessness via a novel stepwise approach.
Stepping on the Edge: Curvature Aware Learning Rate Tuners
·2482 words·12 mins
Machine Learning Deep Learning 🏢 Google DeepMind
Adaptive learning rate tuners often underperform; Curvature Dynamics Aware Tuning (CDAT) prioritizes long-term curvature stabilization, outperforming tuned constant learning rates.
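As a hedged illustration of the curvature-aware ingredient only (not CDAT's actual rule): estimate the sharpness, i.e. the top Hessian eigenvalue, by power iteration on Hessian-vector products, and keep the step size just below the classical 2/λ stability threshold.

```python
import numpy as np

rng = np.random.default_rng(7)

# Quadratic toy loss with anisotropic curvature.
H = np.diag([100.0, 10.0, 1.0])
grad = lambda x: H @ x

def sharpness(x, iters=20, eps=1e-4):
    """Top Hessian eigenvalue via power iteration on finite-difference
    Hessian-vector products (needs only gradient evaluations)."""
    v = rng.normal(size=x.shape)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)
        v = hv / np.linalg.norm(hv)
    return v @ ((grad(x + eps * v) - grad(x - eps * v)) / (2 * eps))

x = rng.normal(size=3)
for t in range(100):
    lam = sharpness(x)
    lr = 1.8 / lam          # stay just below the 2/lambda stability edge
    x -= lr * grad(x)
print(x, sharpness(x))
```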
Stepping Forward on the Last Mile
·2832 words·14 mins
Machine Learning Few-Shot Learning 🏢 Qualcomm AI Research
On-device training with fixed-point forward gradients enables efficient model personalization on resource-constrained edge devices, overcoming backpropagation’s memory limitations.
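A minimal sketch of the forward-gradient idea, with finite differences standing in for forward-mode AD and the fixed-point quantization omitted: sample a random direction, measure the directional derivative using forward evaluations only, and step along the direction scaled by it. No backpropagation state is ever stored.

```python
import numpy as np

rng = np.random.default_rng(8)

def forward_gradient(f, x, eps=1e-4):
    """Forward-gradient estimate: a random direction v scaled by the
    directional derivative f'(x; v). Unbiased for the true gradient
    when v ~ N(0, I). (Finite differences stand in for forward-mode
    AD here; the paper additionally quantizes these computations.)"""
    v = rng.normal(size=x.shape)
    dd = (f(x + eps * v) - f(x - eps * v)) / (2 * eps)   # directional deriv
    return dd * v

f = lambda x: np.sum((x - 1.0) ** 2)      # toy objective, minimum at 1
x = np.zeros(10)
for _ in range(2000):
    x -= 0.01 * forward_gradient(f, x)
print(np.round(x, 2))                     # drifts toward all-ones
```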
StepbaQ: Stepping backward as Correction for Quantized Diffusion Models
·2381 words·12 mins
AI Generated Computer Vision Image Generation 🏢 MediaTek
StepbaQ enhances quantized diffusion models by correcting accumulated quantization errors via a novel sampling step correction mechanism, significantly improving model accuracy without modifying exist…