Posters
2024
Exploring and Exploiting the Asymmetric Valley of Deep Neural Networks
·3483 words·17 mins·
Machine Learning
Federated Learning
🏢 Nanjing University
Deep neural network training reveals asymmetric loss valleys that affect model fusion and federated learning; sign consistency between the noise direction and the converged solution is key.
Exploring Adversarial Robustness of Deep State Space Models
·1844 words·9 mins·
AI Theory
Robustness
🏢 Tsinghua University
Deep state space models (SSMs) gain adversarial robustness through an adaptive scaling mechanism, improving performance without overfitting issues.
Exploratory Retrieval-Augmented Planning For Continual Embodied Instruction Following
·2661 words·13 mins·
Multimodal Learning
Embodied AI
🏢 Department of Computer Science and Engineering, Sungkyunkwan University
ExRAP: A novel framework boosts embodied AI’s continual instruction following by cleverly combining environment exploration with LLM-based planning, leading to significantly improved task success and …
Exploration by Learning Diverse Skills through Successor State Representations
·2767 words·13 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 ISAE-Supaero
LEADS: a novel algorithm that learns diverse skills through successor state representations for robust exploration in reward-free environments.
Exploiting the Replay Memory Before Exploring the Environment: Enhancing Reinforcement Learning Through Empirical MDP Iteration
·2926 words·14 mins·
Machine Learning
Reinforcement Learning
🏢 Department of Computing Science and Amii, University of Alberta
Boost RL performance by solving a series of simplified MDPs before tackling the complex real-world one!
Exploiting Representation Curvature for Boundary Detection in Time Series
·2189 words·11 mins·
Machine Learning
Self-Supervised Learning
🏢 KAIST
RECURVE: A novel boundary detection method leveraging representation trajectory curvature, surpassing state-of-the-art techniques by accommodating both gradual and abrupt changes in time series.
Exploiting LLM Quantization
·1836 words·9 mins·
Natural Language Processing
Large Language Models
🏢 ETH Zurich
LLM quantization, while improving efficiency, creates a security risk: attackers can craft seemingly benign models that exhibit malicious behavior only when quantized.
Exploiting Descriptive Completeness Prior for Cross Modal Hashing with Incomplete Labels
·2505 words·12 mins·
Multimodal Learning
Cross-Modal Retrieval
🏢 Harbin Institute of Technology, Shenzhen
PCRIL, a novel prompt contrastive recovery approach, significantly boosts cross-modal hashing accuracy, especially when dealing with incomplete labels by progressively identifying promising positive c…
Exploiting Activation Sparsity with Dense to Dynamic-k Mixture-of-Experts Conversion
·2629 words·13 mins·
Natural Language Processing
Large Language Models
🏢 Warsaw University of Technology
D2DMoE boosts Transformer efficiency by up to 60% via smart activation sparsity and dynamic expert selection, outperforming existing methods.
Explicit Eigenvalue Regularization Improves Sharpness-Aware Minimization
·2020 words·10 mins·
AI Theory
Generalization
🏢 Monash University
Eigen-SAM significantly boosts generalization in deep learning by directly addressing SAM’s limitations through explicit top Hessian eigenvalue regularization.
Explanations that reveal all through the definition of encoding
·1891 words·9 mins·
AI Theory
Interpretability
🏢 New York University
STRIPE-X, a new method, powerfully detects 'encoding' in AI explanations: a subtle phenomenon where an explanation predicts outcomes better than its constituent parts alone would suggest.
Explaining Datasets in Words: Statistical Models with Natural Language Parameters
·2281 words·11 mins·
Natural Language Processing
Large Language Models
🏢 UC Berkeley
This paper introduces a model-agnostic algorithm that uses natural language predicates to make statistical model parameters directly interpretable, significantly improving explainability.
Expert-level protocol translation for self-driving labs
·2789 words·14 mins·
AI Applications
Manufacturing
🏢 Peking University
This research introduces a novel, automated protocol translation framework for self-driving labs, tackling the challenge of converting human-readable experimental protocols into machine-interpretable …
Expected Probabilistic Hierarchies
·4277 words·21 mins·
Machine Learning
Unsupervised Learning
🏢 Munich Data Science Institute
Expected Probabilistic Hierarchies (EPH) offers a novel, scalable approach to hierarchical clustering by optimizing expected scores under a probabilistic model, outperforming existing methods on vario…
Expectation Alignment: Handling Reward Misspecification in the Presence of Expectation Mismatch
·527 words·3 mins·
AI Generated
AI Theory
Safety
🏢 Colorado State University
This paper introduces Expectation Alignment (EAL), a novel framework and interactive algorithm to address reward misspecification in AI, aligning AI behavior with user expectations.
Expanding Sparse Tuning for Low Memory Usage
·2517 words·12 mins·
Computer Vision
Transfer Learning
🏢 Tsinghua University
SNELL: Sparse tuning with kerNElized LoRA achieves state-of-the-art parameter-efficient fine-tuning performance with drastically reduced memory usage.
Exogenous Matching: Learning Good Proposals for Tractable Counterfactual Estimation
·2755 words·13 mins·
AI Theory
Causality
🏢 Shanghai Key Laboratory of Trustworthy Computing, East China Normal University
Exogenous Matching learns optimal proposals for efficient counterfactual estimation by transforming variance minimization into conditional distribution learning, outperforming existing methods.
Exocentric-to-Egocentric Video Generation
·2698 words·13 mins·
AI Generated
Computer Vision
Video Understanding
🏢 National University of Singapore
Exo2Ego-V generates realistic egocentric videos from sparse exocentric views, significantly outperforming state-of-the-art methods on a challenging benchmark.
Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking
·2172 words·11 mins·
Machine Learning
Reinforcement Learning
🏢 Technical University of Munich
Boost RL efficiency in continuous action spaces by excluding irrelevant actions with three novel continuous action masking methods!
Exactly Minimax-Optimal Locally Differentially Private Sampling
·1615 words·8 mins·
AI Theory
Privacy
🏢 KAIST
This paper provides the first exact minimax-optimal mechanisms for locally differentially private sampling, applicable across all f-divergences.