Posters
2024
Overcoming Brittleness in Pareto-Optimal Learning Augmented Algorithms
·1976 words·10 mins·
AI Theory
Optimization
🏢 Sorbonne University
This research introduces a novel framework that overcomes the brittleness of Pareto-optimal learning-augmented algorithms by enforcing smoothness in performance using user-specified profiles and devel…
Over-parameterized Student Model via Tensor Decomposition Boosted Knowledge Distillation
·2892 words·14 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 Gaoling School of Artificial Intelligence, Renmin University of China
Over-parameterized Distillation Framework (OPDF) boosts knowledge distillation by efficiently over-parameterizing student models via tensor decomposition, significantly improving performance without i…
Outlier-Robust Distributionally Robust Optimization via Unbalanced Optimal Transport
·2832 words·14 mins·
AI Generated
AI Theory
Optimization
🏢 KTH Royal Institute of Technology
Outlier-robust distributionally robust optimization achieved via a novel Unbalanced Optimal Transport (UOT) distance, improving efficiency and accuracy.
Out-Of-Distribution Detection with Diversification (Provably)
·1846 words·9 mins·
Machine Learning
Deep Learning
🏢 College of Intelligence and Computing, Tianjin University
Boost OOD detection accuracy with DiverseMix: a novel method enhancing auxiliary outlier diversity, provably improving generalization and achieving state-of-the-art results.
Out-of-Distribution Detection with a Single Unconditional Diffusion Model
·2009 words·10 mins·
Machine Learning
Unsupervised Learning
🏢 Department of Computer Science, National University of Singapore
Single diffusion model achieves competitive out-of-distribution detection across diverse tasks by analyzing diffusion path characteristics.
OTTER: Effortless Label Distribution Adaptation of Zero-shot Models
·2811 words·14 mins·
Machine Learning
Few-Shot Learning
🏢 Department of Computer Sciences, University of Wisconsin-Madison
OTTER effortlessly adapts zero-shot models to new tasks by adjusting predictions using optimal transport, improving accuracy significantly without extra training data.
OT4P: Unlocking Effective Orthogonal Group Path for Permutation Relaxation
·2531 words·12 mins·
AI Generated
AI Theory
Optimization
🏢 School of Artificial Intelligence, Jilin University
OT4P: a novel temperature-controlled differentiable transformation efficiently relaxes permutation matrices onto the orthogonal group for gradient-based optimization.
OSLO: One-Shot Label-Only Membership Inference Attacks
·2719 words·13 mins·
AI Theory
Privacy
🏢 University of Massachusetts Amherst
One-shot label-only attack (OSLO) achieves high membership inference accuracy with only one query, surpassing existing methods by a large margin.
Ordering-Based Causal Discovery for Linear and Nonlinear Relations
·2689 words·13 mins·
AI Generated
AI Theory
Causality
🏢 Central South University
Causal discovery algorithm CaPS efficiently handles mixed linear and nonlinear relationships in observational data, outperforming existing methods on synthetic and real-world datasets.
Ordered Momentum for Asynchronous SGD
·2380 words·12 mins·
AI Generated
Machine Learning
Deep Learning
🏢 National Key Laboratory for Novel Software Technology, School of Computer Science, Nanjing University
Ordered Momentum (OrMo) significantly boosts asynchronous stochastic gradient descent (ASGD) convergence by cleverly incorporating momentum, resolving prior convergence issues. This novel approach is…
Order-Independence Without Fine Tuning
·1791 words·9 mins·
Natural Language Processing
Large Language Models
🏢 Harvard University
Set-Based Prompting guarantees order-independent LLM outputs by modifying input representations, eliminating unwanted inconsistencies without fine-tuning.
Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling
·1896 words·9 mins·
Natural Language Processing
Large Language Models
🏢 Google Research
Orchid: a novel deep learning architecture using data-dependent convolution achieves quasilinear scalability and outperforms attention-based models on various sequence modeling tasks.
Oracle-Efficient Reinforcement Learning for Max Value Ensembles
·1715 words·9 mins·
Machine Learning
Reinforcement Learning
🏢 University of Pennsylvania
Boost RL performance in large state spaces by efficiently learning a policy competitive with the best combination of existing base policies!
Oracle-Efficient Differentially Private Learning with Public Data
·293 words·2 mins·
AI Theory
Privacy
🏢 MIT
This paper introduces computationally efficient algorithms for differentially private learning by leveraging public data, overcoming previous computational limitations and enabling broader practical a…
OPUS: Occupancy Prediction Using a Sparse Set
·2458 words·12 mins·
Computer Vision
3D Vision
🏢 Nankai University
OPUS: a novel, real-time occupancy prediction framework using a sparse set prediction paradigm, outperforms state-of-the-art methods on Occ3D-nuScenes.
Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks
·3223 words·16 mins·
AI Applications
Gaming
🏢 Harbin Institute of Technology, Shenzhen
Optimus-1: Hybrid Multimodal Memory empowers AI agents to excel in complex, long-horizon tasks by integrating hierarchical knowledge graphs and multimodal experience for superior planning and reflecti…
Optimizing the coalition gain in Online Auctions with Greedy Structured Bandits
·1842 words·9 mins·
AI Theory
Optimization
🏢 Department of Statistics, University of Oxford
Two novel algorithms, Local-Greedy and Greedy-Grid, optimize coalition gain in online auctions with limited observations, achieving constant regret and problem-independent guarantees while respecting …
Optimizing over Multiple Distributions under Generalized Quasar-Convexity Condition
·331 words·2 mins·
Machine Learning
Reinforcement Learning
🏢 Peking University
This paper proposes ‘generalized quasar-convexity’ to optimize problems with multiple probability distributions, offering adaptive algorithms with superior iteration complexities compared to existing …
Optimized Feature Generation for Tabular Data via LLMs with Decision Tree Reasoning
·4495 words·22 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 KAIST
LLMs boost tabular data prediction by generating optimized features via decision tree reasoning, outperforming existing methods.
Optimization Can Learn Johnson Lindenstrauss Embeddings
·412 words·2 mins·
AI Theory
Optimization
🏢 University of Texas at Austin
Optimization can learn optimal Johnson-Lindenstrauss embeddings, avoiding the limitations of randomized methods and achieving comparable theoretical guarantees.