Posters
2024
OptEx: Expediting First-Order Optimization with Approximately Parallelized Iterations
·2570 words·13 mins·
AI Generated
Machine Learning
Optimization
🏢 School of Information Technology, Carleton University
OptEx speeds up first-order optimization by approximately parallelizing its iterations, enabling faster convergence on complex tasks.
Opponent Modeling with In-context Search
·2301 words·11 mins·
Machine Learning
Reinforcement Learning
🏢 Tencent AI Lab
Opponent Modeling with In-context Search (OMIS) leverages in-context learning and decision-time search for stable and effective opponent adaptation in multi-agent environments.
Opponent Modeling based on Subgoal Inference
·2148 words·11 mins·
Machine Learning
Reinforcement Learning
🏢 Peking University
Opponent modeling based on subgoal inference (OMG) outperforms existing methods by inferring opponent subgoals, enabling better generalization to unseen opponents in multi-agent environments.
Operator World Models for Reinforcement Learning
·388 words·2 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Istituto Italiano Di Tecnologia
POWR: a novel RL algorithm using operator world models and policy mirror descent achieves global convergence with improved sample efficiency.
OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators
·2594 words·13 mins·
Machine Learning
Reinforcement Learning
🏢 Stanford University
OPERA: A new algorithm intelligently blends multiple offline policy evaluation estimators for more accurate policy performance estimates.
OpenGaussian: Towards Point-Level 3D Gaussian-based Open Vocabulary Understanding
·2396 words·12 mins·
Computer Vision
3D Vision
🏢 Peking University
OpenGaussian achieves 3D point-level open vocabulary understanding using 3D Gaussian Splatting by training 3D instance features with high 3D consistency, employing a two-level codebook for feature dis…
OpenDlign: Open-World Point Cloud Understanding with Depth-Aligned Images
·2441 words·12 mins·
Computer Vision
3D Vision
🏢 Imperial College London
OpenDlign uses novel depth-aligned images from a diffusion model to boost open-world 3D understanding, achieving significant performance gains on diverse benchmarks.
Open-Vocabulary Object Detection via Language Hierarchy
·2960 words·14 mins·
Computer Vision
Object Detection
🏢 Nanyang Technological University
Language Hierarchical Self-training (LHST) enhances weakly-supervised object detection by integrating language hierarchy, mitigating label mismatch, and improving generalization across diverse datasets.
Open-Book Neural Algorithmic Reasoning
·1944 words·10 mins·
AI Generated
Machine Learning
Deep Learning
🏢 East China Normal University
This paper introduces open-book neural algorithmic reasoning, a novel framework that significantly enhances neural reasoning capabilities by allowing networks to access and utilize all training instances.
Open LLMs are Necessary for Current Private Adaptations and Outperform their Closed Alternatives
·2599 words·13 mins·
Natural Language Processing
Large Language Models
🏢 CISPA Helmholtz Center for Information Security
Open LLMs outperform closed alternatives for private data adaptation, offering superior privacy, performance, and lower costs.
OPEL: Optimal Transport Guided ProcedurE Learning
·2652 words·13 mins·
Computer Vision
Video Understanding
🏢 Purdue University
OPEL, a novel optimal transport framework for procedure learning, significantly outperforms SOTA methods by aligning similar video frames and relaxing strict temporal assumptions.
Only Strict Saddles in the Energy Landscape of Predictive Coding Networks?
·2012 words·10 mins·
AI Theory
Optimization
🏢 University of Sussex
Predictive coding networks learn faster than backpropagation by changing the loss landscape’s geometry, making saddles easier to escape and improving robustness to vanishing gradients.
OnlineTAS: An Online Baseline for Temporal Action Segmentation
·2736 words·13 mins·
AI Generated
Computer Vision
Video Understanding
🏢 National University of Singapore
OnlineTAS, a novel framework, achieves state-of-the-art performance in online temporal action segmentation by using an adaptive memory and a post-processing method to mitigate over-segmentation.
Online Weighted Paging with Unknown Weights
·1583 words·8 mins·
AI Theory
Optimization
🏢 Tel Aviv University
First algorithm for online weighted paging that learns page weights from samples, achieving optimal O(log k) competitiveness and sublinear regret.
Online Relational Inference for Evolving Multi-agent Interacting Systems
·2683 words·13 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Georgia Institute of Technology
ORI: a novel online relational inference framework efficiently identifies hidden interaction graphs in evolving multi-agent systems using streaming data and real-time adaptation.
Online Posterior Sampling with a Diffusion Prior
·1905 words·9 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Adobe Research
This paper introduces efficient approximate posterior sampling for contextual bandits using diffusion model priors, improving Thompson sampling’s performance and expressiveness.
Online Learning with Sublinear Best-Action Queries
·344 words·2 mins·
AI Generated
Machine Learning
Online Learning
🏢 Sapienza University of Rome
Augmenting online learning algorithms with a sublinear number of best-action queries achieves optimal regret.
Online Learning of Delayed Choices
·1433 words·7 mins·
AI Theory
Optimization
🏢 University of Waterloo
New algorithms handle delayed feedback in online choice modeling, achieving optimal decision-making even when customer preferences are unknown and responses arrive late.
Online Iterative Reinforcement Learning from Human Feedback with General Preference Model
·1619 words·8 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 University of Illinois Urbana-Champaign
This paper proposes a novel, reward-free RLHF framework using a general preference oracle, surpassing existing reward-based approaches in efficiency and generalizability.
Online Feature Updates Improve Online (Generalized) Label Shift Adaptation
·1991 words·10 mins·
Machine Learning
Self-Supervised Learning
🏢 UC San Diego
Online Label Shift adaptation with Online Feature Updates (OLS-OFU) significantly boosts online label shift adaptation by dynamically refining feature extractors using self-supervised learning, achiev…