
Posters

2024

Time-Constrained Robust MDPs
·10005 words·47 mins
AI Generated Machine Learning Reinforcement Learning 🏢 IRT Saint-Exupéry
Time-Constrained Robust MDPs (TC-RMDPs) improve reinforcement learning by addressing limitations of traditional methods, offering a novel framework for handling real-world uncertainties and yielding m…
Time Makes Space: Emergence of Place Fields in Networks Encoding Temporally Continuous Sensory Experiences
·3838 words·19 mins
AI Generated Machine Learning Deep Learning 🏢 University of Pennsylvania
Networks trained on continuous sensory data spontaneously develop place cell-like responses, demonstrating that time-encoded experience can create spatial maps in the brain.
Tighter Convergence Bounds for Shuffled SGD via Primal-Dual Perspective
·1717 words·9 mins
AI Generated AI Theory Optimization 🏢 University of Wisconsin-Madison
Shuffled SGD’s convergence is now better understood through a primal-dual analysis, yielding tighter bounds that align with its superior empirical performance.
Tight Rates for Bandit Control Beyond Quadratics
·406 words·2 mins
AI Generated AI Theory Optimization 🏢 Princeton University
This paper presents an algorithm achieving Õ(√T) optimal regret for bandit non-stochastic control with strongly-convex and smooth cost functions, overcoming prior limitations of suboptimal bounds.
Tight Bounds for Learning RUMs from Small Slates
·255 words·2 mins
AI Generated AI Theory Optimization 🏢 Google Research
Learning user preferences accurately from limited data is key; this paper shows that surprisingly small datasets suffice for precise prediction, and provides efficient algorithms to achieve this.
Thought of Search: Planning with Language Models Through The Lens of Efficiency
·282 words·2 mins
Natural Language Processing Large Language Models 🏢 IBM Research
This paper introduces ‘Thought of Search,’ a novel, efficient planning approach using LLMs that prioritizes soundness and completeness. It leverages LLMs to generate Python code for search components,…
This Too Shall Pass: Removing Stale Observations in Dynamic Bayesian Optimization
·4337 words·21 mins
AI Generated Machine Learning Optimization 🏢 IC, EPFL
W-DBO efficiently tackles stale data in dynamic Bayesian Optimization by leveraging a novel Wasserstein distance-based criterion to remove irrelevant observations, maintaining a high sampling frequency.
Thinking Forward: Memory-Efficient Federated Finetuning of Language Models
·4828 words·23 mins
Natural Language Processing Large Language Models 🏢 University of Massachusetts Amherst
SPRY: A memory-efficient federated learning algorithm for finetuning LLMs on resource-constrained devices, achieving high accuracy and speed.
Theoretical Investigations and Practical Enhancements on Tail Task Risk Minimization in Meta Learning
·3609 words·17 mins
AI Generated Machine Learning Meta Learning 🏢 College of Science, National University of Defense Technology
This research enhances meta-learning robustness by theoretically grounding and practically improving tail-risk minimization, improving fast adaptation across the task space.
Theoretical guarantees in KL for Diffusion Flow Matching
·242 words·2 mins
AI Generated AI Theory Generalization 🏢 École Polytechnique
Novel theoretical guarantees for Diffusion Flow Matching (DFM) models are established, bounding the KL divergence under mild assumptions on data and base distributions.
Theoretical Foundations of Deep Selective State-Space Models
·379 words·2 mins
AI Theory Generalization 🏢 Imperial College London
Deep learning’s sequence modeling is revolutionized by selective state-space models (SSMs)! This paper provides theoretical grounding for their superior performance, revealing the crucial role of gating…
Theoretical Characterisation of the Gauss Newton Conditioning in Neural Networks
·2952 words·14 mins
AI Theory Optimization 🏢 University of Basel
New theoretical bounds reveal how neural network architecture impacts the Gauss-Newton matrix’s conditioning, paving the way for improved optimization.
Theoretical and Empirical Insights into the Origins of Degree Bias in Graph Neural Networks
·2828 words·14 mins
AI Theory Fairness 🏢 University of California, Los Angeles
Researchers unveil the origins of degree bias in Graph Neural Networks (GNNs), proving high-degree nodes’ lower misclassification probability and proposing methods to alleviate this bias for fairer GNNs.
Theoretical Analysis of Weak-to-Strong Generalization
·1703 words·8 mins
AI Theory Generalization 🏢 MIT CSAIL
Strong student models can learn from weaker teachers, even correcting errors and generalizing beyond the teacher’s expertise. This paper provides new theoretical bounds explaining this ‘weak-to-strong’ generalization.
The Unmet Promise of Synthetic Training Images: Using Retrieved Real Images Performs Better
·2874 words·14 mins
AI Generated Computer Vision Image Classification 🏢 University of Washington
Using real images retrieved from a generator’s training data outperforms using synthetic images generated by that same model for image classification.
The tree autoencoder model, with application to hierarchical data visualization
·2243 words·11 mins
Machine Learning Unsupervised Learning 🏢 Dept. of Computer Science and Engineering, University of California, Merced
PCA tree: a novel hierarchical dimensionality reduction model visualized using oblique trees and local PCAs, offering speed and interpretability.
The Surprising Ineffectiveness of Pre-Trained Visual Representations for Model-Based Reinforcement Learning
·2250 words·11 mins
Machine Learning Reinforcement Learning 🏢 Bosch Center for Artificial Intelligence
Contrary to expectations, pre-trained visual representations don’t improve model-based reinforcement learning’s sample efficiency or generalization; data diversity and network architecture…
The surprising efficiency of temporal difference learning for rare event prediction
·1614 words·8 mins
Machine Learning Reinforcement Learning 🏢 Courant Institute of Mathematical Sciences, New York University
TD learning surprisingly outperforms Monte Carlo methods for rare event prediction in Markov chains, achieving relative accuracy with polynomially many observed transitions instead of exponentially many.
The Surprising Effectiveness of SP Voting with Partial Preferences
·3640 words·18 mins
AI Theory Optimization 🏢 Penn State University
Partial preferences and noisy votes hinder accurate ranking recovery; this paper introduces scalable SP voting variants, empirically demonstrating superior performance in recovering ground-truth rankings.
The Star Geometry of Critic-Based Regularizer Learning
·1709 words·9 mins
Machine Learning Unsupervised Learning 🏢 University of California, Los Angeles
Star geometry reveals optimal data-driven regularizers!