Posters
2024
Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging
·1775 words·9 mins·
Computer Vision
Image Generation
🏢 Rochester Institute of Technology
Federated Hardware-Prompt Learning (FedHP) enables robust cross-hardware SCI training by aligning inconsistent data distributions using a hardware-conditioned prompter, outperforming existing FL metho…
Cooperate or Collapse: Emergence of Sustainable Cooperation in a Society of LLM Agents
·6245 words·30 mins·
Natural Language Processing
Large Language Models
🏢 ETH Zurich
LLM agents struggle to cooperate sustainably; GOVSIM reveals this failure and shows that communication and ‘universalization’ reasoning improve outcomes.
Convolutions and More as Einsum: A Tensor Network Perspective with Advances for Second-Order Methods
·8742 words·42 mins·
Machine Learning
Deep Learning
🏢 Vector Institute
This paper accelerates second-order optimization in CNNs by 4.5x, using a novel tensor network representation that simplifies convolutions and reduces memory overhead.
Convergence of No-Swap-Regret Dynamics in Self-Play
·1267 words·6 mins·
AI Theory
Optimization
🏢 Google Research
In symmetric zero-sum games, no-swap-regret dynamics guarantee strong convergence to Nash Equilibrium under symmetric initial conditions, but this advantage disappears when constraints are relaxed.
Convergence of $\log(1/\epsilon)$ for Gradient-Based Algorithms in Zero-Sum Games without the Condition Number: A Smoothed Analysis
·262 words·2 mins·
AI Theory
Optimization
🏢 Carnegie Mellon University
Gradient-based methods for solving large zero-sum games achieve polynomial smoothed complexity, demonstrating efficiency even in high-precision scenarios without condition number dependence.
Convergence Analysis of Split Federated Learning on Heterogeneous Data
·2397 words·12 mins·
Machine Learning
Federated Learning
🏢 Guangdong University of Technology
Split Federated Learning (SFL) convergence is analyzed for heterogeneous data, achieving O(1/T) and O(1/√T) rates for strongly convex and general convex objectives respectively. The study also extend…
ControlSynth Neural ODEs: Modeling Dynamical Systems with Guaranteed Convergence
·2928 words·14 mins·
Machine Learning
Deep Learning
🏢 Southeast University
ControlSynth Neural ODEs (CSODEs) guarantee convergence in complex dynamical systems via tractable linear inequalities, improving neural ODE modeling.
ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models
·3057 words·15 mins·
Multimodal Learning
Vision-Language Models
🏢 Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
ControlMLLM: Inject visual prompts into MLLMs via learnable latent-variable optimization, enabling training-free referring with box, mask, scribble, and point prompts.
Controlling Multiple Errors Simultaneously with a PAC-Bayes Bound
·547 words·3 mins·
AI Generated
AI Theory
Generalization
🏢 University College London
New PAC-Bayes bound controls multiple error types simultaneously, providing richer generalization guarantees.
Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets
·2473 words·12 mins·
AI Theory
Causality
🏢 Max Planck Institute for Software Systems
AI decision support systems can unintentionally harm users; this paper introduces a novel framework to design systems that minimize this counterfactual harm, balancing accuracy and user well-being.
Controlling Continuous Relaxation for Combinatorial Optimization
·2228 words·11 mins·
Machine Learning
Unsupervised Learning
🏢 Fujitsu Limited
Continuous Relaxation Annealing (CRA) significantly boosts unsupervised learning-based solvers for combinatorial optimization by dynamically shifting from continuous to discrete solutions, eliminating…
Controlled maximal variability along with reliable performance in recurrent neural networks
·2025 words·10 mins·
Machine Learning
Reinforcement Learning
🏢 Universitat Pompeu Fabra
NeuroMOP, a novel neural principle, maximizes neural variability while ensuring reliable performance in recurrent neural networks, offering new insights into brain function and artificial intelligence…
Contrastive-Equivariant Self-Supervised Learning Improves Alignment with Primate Visual Area IT
·2007 words·10 mins·
Computer Vision
Self-Supervised Learning
🏢 Center for Neural Science, New York University
Self-supervised learning models can now better predict primate IT neural responses by preserving structured variability under input transformations, improving alignment with biological visual perception.
Contrastive losses as generalized models of global epistasis
·3227 words·16 mins·
AI Generated
AI Theory
Optimization
🏢 Dyno Therapeutics
Contrastive losses unlock efficient fitness function modeling by leveraging the ranking information inherent in global epistasis, significantly improving accuracy and data efficiency in protein engine…
Contrastive dimension reduction: when and how?
·1931 words·10 mins·
Machine Learning
Dimensionality Reduction
🏢 University of North Carolina at Chapel Hill
This research introduces a hypothesis test and a contrastive dimension estimator to identify unique foreground information in contrastive datasets, advancing the field of dimension reduction.
Contrasting with Symile: Simple Model-Agnostic Representation Learning for Unlimited Modalities
·1891 words·9 mins·
AI Generated
Multimodal Learning
Vision-Language Models
🏢 New York University
Symile: A simple model-agnostic approach for learning representations from unlimited modalities, outperforming pairwise CLIP by capturing higher-order information.
CONTRAST: Continual Multi-source Adaptation to Dynamic Distributions
·2633 words·13 mins·
Machine Learning
Domain Adaptation
🏢 University of Michigan
CONTRAST efficiently adapts multiple source models to dynamic data distributions by optimally weighting models and selectively updating only the most relevant ones, achieving robust performance withou…
Contracting with a Learning Agent
·2554 words·12 mins·
AI Theory
Optimization
🏢 Google Research
Repeated contracts with learning agents are optimized by a simple dynamic contract: initially linear, then switching to zero-cost, causing the agent’s actions to ‘free-fall’ and yield non-zero rewards…
Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving
·3110 words·15 mins·
AI Applications
Autonomous Vehicles
🏢 Zhejiang University
LeapAD, a novel autonomous driving paradigm, uses a dual-process architecture mirroring human cognition to achieve continuous learning and improved adaptability. Employing a VLM for efficient scene u…
Continuous Temporal Domain Generalization
·2639 words·13 mins·
AI Generated
Machine Learning
Domain Generalization
🏢 University of Tokyo
Koodos: a novel Koopman operator-driven framework that tackles Continuous Temporal Domain Generalization (CTDG) by modeling continuous data dynamics and learning model evolution across irregular time …