Posters
2024
Improving Generalization and Convergence by Enhancing Implicit Regularization
·2134 words·11 mins·
Machine Learning
Deep Learning
🏢 Peking University
IRE framework expedites the discovery of flat minima in deep learning, enhancing generalization and convergence. By decoupling the dynamics of flat and sharp directions, IRE boosts sharpness reduction…
Improving Equivariant Model Training via Constraint Relaxation
·1689 words·8 mins·
Machine Learning
Deep Learning
🏢 University of Pennsylvania
Boost equivariant model training by strategically relaxing constraints during training, enhancing optimization and generalization!
Improving Deep Reinforcement Learning by Reducing the Chain Effect of Value and Policy Churn
·3413 words·17 mins·
Machine Learning
Reinforcement Learning
🏢 Université De Montréal
Deep RL agents often suffer from instability due to the ‘chain effect’ of value and policy churn; this paper introduces CHAIN, a novel method to reduce this churn, thereby improving DRL performance an…
Improving Deep Learning Optimization through Constrained Parameter Regularization
·3522 words·17 mins·
Machine Learning
Deep Learning
🏢 University of Freiburg
Constrained Parameter Regularization (CPR) outperforms traditional weight decay by dynamically adapting regularization strengths for individual parameters, leading to better deep learning model perfor…
Improving Decision Sparsity
·4802 words·23 mins·
AI Generated
AI Theory
Interpretability
🏢 Duke University
Boosting machine learning model interpretability, this paper introduces cluster-based and tree-based Sparse Explanation Values (SEV) for generating more meaningful and credible explanations by optimiz…
Improving Context-Aware Preference Modeling for Language Models
·1939 words·10 mins·
Natural Language Processing
Large Language Models
🏢 Microsoft Research
Context-aware preference modeling improves language model alignment by resolving ambiguity through a two-step process: context selection followed by context-specific preference evaluation. The approa…
Improving Alignment and Robustness with Circuit Breakers
·2515 words·12 mins·
AI Theory
Safety
🏢 Gray Swan AI
AI systems are made safer by ‘circuit breakers’ that directly control harmful internal representations, significantly improving alignment and robustness against adversarial attacks with minimal impact…
Improving Adversarial Robust Fairness via Anti-Bias Soft Label Distillation
·2396 words·12 mins·
AI Generated
AI Theory
Robustness
🏢 Institute of Artificial Intelligence, Beihang University
Boosting adversarial robust fairness in deep neural networks, Anti-Bias Soft Label Distillation (ABSLD) adaptively adjusts soft-label smoothness to reduce the error gap between classes.
Improving Adaptivity via Over-Parameterization in Sequence Models
·2081 words·10 mins·
AI Generated
AI Theory
Generalization
🏢 Tsinghua University
Over-parameterized gradient descent dynamically adapts to signal structure, improving sequence model generalization and outperforming fixed-kernel methods.
Improved Sample Complexity for Multiclass PAC Learning
·258 words·2 mins·
Machine Learning
Optimization
🏢 Purdue University
This paper significantly improves our understanding of multiclass PAC learning by reducing the sample complexity gap and proposing two novel approaches to fully resolve the optimal sample complexity.
Improved Sample Complexity Bounds for Diffusion Model Training
·360 words·2 mins·
Machine Learning
Deep Learning
🏢 University of Texas at Austin
Training high-quality diffusion models efficiently is now possible, thanks to novel sample complexity bounds that improve exponentially on previous work.
Improved Regret of Linear Ensemble Sampling
·1286 words·7 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Seoul National University
Linear ensemble sampling achieves a state-of-the-art regret bound of Õ(d³/²√T) with a logarithmic ensemble size, closing the theory-practice gap in linear bandit algorithms.
Improved Regret for Bandit Convex Optimization with Delayed Feedback
·324 words·2 mins·
AI Theory
Optimization
🏢 Zhejiang University
A novel algorithm, D-FTBL, achieves improved regret bounds for bandit convex optimization with delayed feedback, tightly matching existing lower bounds in worst-case scenarios.
Improved off-policy training of diffusion samplers
·2211 words·11 mins·
Machine Learning
Deep Learning
🏢 University of Toronto
Researchers enhanced diffusion samplers by developing a novel exploration strategy and a unified library, improving sample quality and addressing reproducibility challenges.
Improved learning rates in multi-unit uniform price auctions
·442 words·3 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 University of Oxford
New modeling of the bid space in multi-unit uniform price auctions achieves regret of Õ(K⁴ᐟ³T²ᐟ³) under bandit feedback, improving over prior work and closing the gap with discriminatory pricing.
Improved Guarantees for Fully Dynamic $k$-Center Clustering with Outliers in General Metric Spaces
·1694 words·8 mins·
loading
·
loading
AI Theory
Optimization
🏢 Eindhoven University of Technology
A novel fully dynamic algorithm achieves a (4+ε)-approximate solution for the k-center clustering problem with outliers in general metric spaces, boasting an efficient update time.
Improved Generation of Adversarial Examples Against Safety-aligned LLMs
·2198 words·11 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 UC Davis
Researchers developed novel methods to improve the generation of adversarial examples against safety-aligned LLMs, achieving significantly higher attack success rates compared to existing techniques.
Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses
·2125 words·10 mins·
Natural Language Processing
Large Language Models
🏢 Sea AI Lab
Improved few-shot jailbreaking techniques efficiently circumvent aligned language models and their defenses, achieving high success rates even against advanced protection methods.
Improved Bayes Regret Bounds for Multi-Task Hierarchical Bayesian Bandit Algorithms
·1596 words·8 mins·
Machine Learning
Reinforcement Learning
🏢 Hong Kong University of Science and Technology
This paper significantly improves Bayes regret bounds for hierarchical Bayesian bandit algorithms, achieving logarithmic regret in finite action settings and enhanced bounds in multi-task linear and c…
Improved Analysis for Bandit Learning in Matching Markets
·707 words·4 mins·
AI Generated
AI Theory
Optimization
🏢 Shanghai Jiao Tong University
A new algorithm, AOGS, achieves significantly lower regret in two-sided matching markets by cleverly integrating exploration and exploitation, thus removing the dependence on the number of arms (K) in…