
Posters

2024

Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training
·2198 words·11 mins
AI Theory Fairness 🏢 University of Cambridge
AI systems acquire bias during training, impacting accuracy across sub-populations. This research unveils bias’s dynamic nature, revealing how classifier preferences shift over time, influenced by dat…
Bias Detection via Signaling
·295 words·2 mins
AI Theory Optimization 🏢 Harvard University
This paper presents efficient algorithms to detect whether an agent updates beliefs optimally (Bayesian) or exhibits bias towards their prior beliefs, using information design and signaling schemes.
Bias Amplification in Language Model Evolution: An Iterated Learning Perspective
·3378 words·16 mins
Natural Language Processing Large Language Models 🏢 UBC
LLMs’ iterative interactions amplify subtle biases; this paper uses a Bayesian Iterated Learning framework to explain this phenomenon and offers strategies to guide LLM evolution.
Beyond the Doors of Perception: Vision Transformers Represent Relations Between Objects
·9001 words·43 mins
AI Generated Computer Vision Visual Question Answering 🏢 Brown University
Vision transformers surprisingly struggle with visual relations; this study reveals ViTs use distinct perceptual and relational processing stages to solve same/different tasks, highlighting a previous…
Beyond task diversity: provable representation transfer for sequential multitask linear bandits
·1405 words·7 mins
Machine Learning Reinforcement Learning 🏢 University of Arizona
Lifelong learning in linear bandits gets a boost! A new algorithm, BOSS, achieves low regret without the usual 'task diversity' assumption, opening doors for more realistic sequential multi-task lear…
Beyond Slow Signs in High-fidelity Model Extraction
·2693 words·13 mins
AI Generated Machine Learning Deep Learning 🏢 University of Cambridge
Researchers drastically sped up high-fidelity deep learning model extraction, improving efficiency by up to 14.8x and challenging previous assumptions on the extraction bottleneck.
Beyond Single Stationary Policies: Meta-Task Players as Naturally Superior Collaborators
·2377 words·12 mins
AI Applications Human-AI Interaction 🏢 MOE KLINNS Lab, Xi'an Jiaotong University
AI struggles to collaborate effectively with humans due to unpredictable human behavior. This paper introduces Collaborative Bayesian Policy Reuse (CBPR), a novel framework that leverages meta-ta…
Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning
·2041 words·10 mins
Machine Learning Unsupervised Learning 🏢 University of Electronic Science and Technology of China
InfoMGF, a novel framework, tackles the limitations of unsupervised multiplex graph learning by refining graph structures, maximizing task-relevant information (both shared and unique), and achieving …
Beyond Primal-Dual Methods in Bandits with Stochastic and Adversarial Constraints
·252 words·2 mins
AI Generated AI Theory Optimization 🏢 Bocconi University
This paper presents a novel, UCB-like algorithm for bandits with stochastic and adversarial constraints, achieving optimal performance without the stringent assumptions of prior primal-dual methods.
Beyond Euclidean: Dual-Space Representation Learning for Weakly Supervised Video Violence Detection
·2466 words·12 mins
Computer Vision Video Understanding 🏢 Chongqing University of Posts and Telecommunications
Beyond Euclidean spaces, Dual-Space Representation Learning (DSRL) enhances weakly supervised video violence detection by cleverly integrating Euclidean and hyperbolic geometries for superior discrimi…
Beyond Efficiency: Molecular Data Pruning for Enhanced Generalization
·2607 words·13 mins
AI Generated Machine Learning Transfer Learning 🏢 Chinese Academy of Sciences
MolPeg, a novel molecular data pruning framework, enhances model generalization in transfer learning by using a source-free approach and consistently outperforming other methods, even surpassing full-…
Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?
·3015 words·15 mins
AI Theory Interpretability 🏢 ETH Zurich
This paper presents a novel method to make black box neural networks intervenable using only a small validation set with concept labels, improving the effectiveness of concept-based interventions.
Beyond Accuracy: Tracking more like Human via Visual Search
·2966 words·14 mins
Computer Vision Video Understanding 🏢 School of Artificial Intelligence, University of Chinese Academy of Sciences
CPDTrack: Human-like Visual Search Boosts Object Tracking!
Beyond Accuracy: Ensuring Correct Predictions With Correct Rationales
·2877 words·14 mins
AI Generated Natural Language Processing Vision-Language Models 🏢 Department of Computer & Information Science, University of Delaware
This research introduces a novel two-phase approach to improve AI model trustworthiness by ensuring both correct predictions and correct rationales. A new dataset with structured rationales and a rat…
Beware of Road Markings: A New Adversarial Patch Attack to Monocular Depth Estimation
·2480 words·12 mins
AI Applications Autonomous Vehicles 🏢 Nanyang Technological University
Researchers developed AdvRM, a new adversarial patch attack against monocular depth estimation models, which effectively camouflages patches as road markings to mislead depth predictions for any obsta…
BetterDepth: Plug-and-Play Diffusion Refiner for Zero-Shot Monocular Depth Estimation
·3663 words·18 mins
Computer Vision 3D Vision 🏢 ETH Zurich
BetterDepth: A plug-and-play diffusion refiner boosts zero-shot monocular depth estimation by adding fine details while preserving accurate geometry.
Better by default: Strong pre-tuned MLPs and boosted trees on tabular data
·6742 words·32 mins
Machine Learning Deep Learning 🏢 Inria Paris
Strong pre-tuned MLPs and meta-tuned default parameters for GBDTs and MLPs improve tabular data classification and regression.
BERTs are Generative In-Context Learners
·2108 words·10 mins
Natural Language Processing Large Language Models 🏢 Language Technology Group, University of Oslo
Masked language models can perform in-context learning, challenging the dominance of causal models in this area.
BendVLM: Test-Time Debiasing of Vision-Language Embeddings
·2604 words·13 mins
Multimodal Learning Vision-Language Models 🏢 MIT
BEND-VLM: A novel, efficient test-time debiasing method for vision-language models, resolving bias without retraining.
BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models
·3084 words·15 mins
AI Generated Computer Vision Image Generation 🏢 Zhejiang University
O-BELM, a novel diffusion model sampler, achieves mathematically exact inversion with superior sampling quality, offering a new gold standard for diffusion model applications.