Posters
2024
The Dormant Neuron Phenomenon in Multi-Agent Reinforcement Learning Value Factorization
·2766 words·13 mins·
Machine Learning
Reinforcement Learning
🏢 Xiamen University
ReBorn revitalizes multi-agent reinforcement learning by tackling dormant neurons, boosting network expressivity and learning efficiency.
The Closeness of In-Context Learning and Weight Shifting for Softmax Regression
·2475 words·12 mins·
Natural Language Processing
Large Language Models
🏢 Shanghai Jiao Tong University
Softmax regression reveals in-context learning’s surprising similarity to gradient descent in self-attention Transformers, showing the models’ remarkable learning capabilities.
The Challenges of the Nonlinear Regime for Physics-Informed Neural Networks
·2142 words·11 mins·
AI Generated
AI Theory
Optimization
🏢 BMW AG
The training dynamics of Physics-Informed Neural Networks (PINNs) for nonlinear PDEs are fundamentally different from those for linear ones; this paper reveals why using second-order methods is crucial for solving non…
The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection
·2465 words·12 mins·
Machine Learning
Deep Learning
🏢 Tencent AI Lab
Researchers found that superior OOD detection performance comes at the cost of reduced generalization. Their novel Decoupled Uncertainty Learning (DUL) algorithm harmonizes OOD detection and generali…
The Benefits of Balance: From Information Projections to Variance Reduction
·1859 words·9 mins·
Machine Learning
Self-Supervised Learning
🏢 University of Washington
Data balancing in foundation models surprisingly reduces variance, improving model training and performance.
The Bayesian sampling in a canonical recurrent circuit with a diversity of inhibitory interneurons
·1541 words·8 mins·
AI Theory
Optimization
🏢 UT Southwestern Medical Center
Diverse inhibitory neurons in brain circuits enable faster Bayesian computation via Hamiltonian sampling.
TFS-NeRF: Template-Free NeRF for Semantic 3D Reconstruction of Dynamic Scene
·2695 words·13 mins·
AI Generated
Computer Vision
3D Vision
🏢 Faculty of IT, Monash University
TFS-NeRF: A template-free neural radiance field efficiently reconstructs semantically separable 3D geometries of dynamic scenes featuring multiple interacting entities from sparse RGB videos.
TFGDA: Exploring Topology and Feature Alignment in Semi-supervised Graph Domain Adaptation through Robust Clustering
·1822 words·9 mins·
Machine Learning
Transfer Learning
🏢 Zhejiang University
TFGDA: Leveraging graph topology and feature alignment for superior semi-supervised domain adaptation.
Textual Training for the Hassle-Free Removal of Unwanted Visual Data: Case Studies on OOD and Hateful Image Detection
·2145 words·11 mins·
Multimodal Learning
Vision-Language Models
🏢 Seoul National University
Hassle-Free Textual Training (HFTT) uses only textual data to effectively remove unwanted visual data from AI training datasets, significantly reducing human annotation needs.
Text2NKG: Fine-Grained N-ary Relation Extraction for N-ary relational Knowledge Graph Construction
·2178 words·11 mins·
AI Generated
Natural Language Processing
Information Extraction
🏢 School of Computer Science, Beijing University of Posts and Telecommunications, China
Text2NKG: a novel framework for building N-ary relational knowledge graphs by performing fine-grained n-ary relation extraction, supporting multiple schemas, and achieving state-of-the-art accuracy.
Text-Infused Attention and Foreground-Aware Modeling for Zero-Shot Temporal Action Detection
·2535 words·12 mins·
Multimodal Learning
Vision-Language Models
🏢 Dept. of Artificial Intelligence, Korea University
Ti-FAD, a novel zero-shot temporal action detection model, outperforms state-of-the-art methods by enhancing text-related visual focus and foreground awareness.
Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models
·2675 words·13 mins·
Multimodal Learning
Vision-Language Models
🏢 School of Computer Science and Engineering, Tianjin University of Technology
Text-Guided Attention for Zero-Shot Robustness (TGA-ZSR) significantly improves vision-language model robustness against adversarial attacks by aligning and constraining text-guided attention, achievi…
Text-Aware Diffusion for Policy Learning
·3340 words·16 mins·
Multimodal Learning
Vision-Language Models
🏢 Brown University
Text-Aware Diffusion for Policy Learning (TADPoLe) uses pretrained diffusion models for zero-shot reward generation, enabling natural language-driven policy learning without manual reward design.
Testing Semantic Importance via Betting
·4904 words·24 mins·
AI Generated
Multimodal Learning
Vision-Language Models
🏢 Johns Hopkins University
This work presents statistically grounded methods to rank semantic concept importance in black-box models, using conditional independence testing for both global and local interpretations.
Testing Calibration in Nearly-Linear Time
·1823 words·9 mins·
AI Generated
AI Theory
Interpretability
🏢 Harvard University
This paper presents nearly-linear time algorithms for testing model calibration, improving upon existing methods and providing theoretical lower bounds for various calibration measures.
Testably Learning Polynomial Threshold Functions
·248 words·2 mins·
AI Generated
AI Theory
Generalization
🏢 ETH Zurich
This paper achieves efficient testable learning of polynomial threshold functions, matching the best guarantees of agnostic learning and solving a key problem in robust machine learning.
Test-Time Dynamic Image Fusion
·3589 words·17 mins·
AI Generated
Computer Vision
Image Fusion
🏢 Tianjin University
The Test-Time Dynamic Image Fusion (TTD) paradigm provably improves image fusion by dynamically weighting source data based on their relative dominance, reducing generalization error without extra trainin…
Test-Time Adaptation Induces Stronger Accuracy and Agreement-on-the-Line
·2874 words·14 mins·
Machine Learning
Few-Shot Learning
🏢 Carnegie Mellon University
Test-time adaptation strengthens the linear correlation between in- and out-of-distribution accuracy, enabling precise OOD performance prediction and hyperparameter optimization without labeled OOD da…
Test-time Adaptation in Non-stationary Environments via Adaptive Representation Alignment
·2451 words·12 mins·
AI Generated
Machine Learning
Representation Learning
🏢 Stanford University
Ada-ReAlign: a novel algorithm for continual test-time adaptation that leverages non-stationary representation learning to effectively align unlabeled data streams with source data, enhancing model ad…
Test Where Decisions Matter: Importance-driven Testing for Deep Reinforcement Learning
·3658 words·18 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Graz University of Technology
Prioritize crucial decisions in deep RL policy testing with a novel model-based method for rigorous state importance ranking, enabling efficient safety and performance verification.