Posters
2024
Unrolled denoising networks provably learn to perform optimal Bayesian inference
2411 words · 12 mins
AI Generated
AI Theory
Optimization
🏢 Harvard University
Unrolled neural networks, trained via gradient descent, provably achieve optimal Bayesian inference for compressed sensing, surpassing prior-aware counterparts.
Unravelling in Collaborative Learning
214 words · 2 mins
Machine Learning
Federated Learning
🏢 École Polytechnique
Strategic data contributors with varying data quality can cause collaborative learning systems to ‘unravel’, but a novel probabilistic verification method effectively mitigates this, ensuring a stable…
Unraveling the Gradient Descent Dynamics of Transformers
1273 words · 6 mins
AI Theory
Optimization
🏢 University of Minnesota, Twin Cities
This paper reveals how large embedding dimensions and appropriate initialization guarantee convergence in Transformer training, highlighting Gaussian attention’s more favorable optimization landscape compared to Softmax.
Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback
2380 words · 12 mins
Natural Language Processing
Large Language Models
🏢 University of Washington
This study disentangles best practices for learning from preference feedback in LLMs, revealing that data quality, algorithm choice, and reward model significantly impact performance.
Unlocking the Potential of Global Human Expertise
3551 words · 17 mins
AI Generated
AI Applications
Healthcare
🏢 Cognizant AI Labs
AI can unlock the potential of global human expertise for solving complex problems by combining diverse expert solutions using an evolutionary framework, resulting in better and more effective strateg…
Unlocking the Capabilities of Masked Generative Models for Image Synthesis via Self-Guidance
1792 words · 9 mins
Computer Vision
Image Generation
🏢 KAIST
Self-guidance boosts masked generative models’ image synthesis, achieving superior quality and diversity with fewer steps!
Unlock the Intermittent Control Ability of Model Free Reinforcement Learning
2548 words · 12 mins
Machine Learning
Reinforcement Learning
🏢 Tianjin University
MARS, a novel plugin framework, unlocks model-free RL’s intermittent control ability by encoding action sequences into a compact latent space, improving learning efficiency and real-world robotic task…
Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation
2279 words · 11 mins
Computer Vision
Image Segmentation
🏢 Zhejiang University
DiffewS: a novel framework that leverages diffusion models for few-shot semantic segmentation, significantly outperforming existing methods in multiple settings.
Unleashing the Denoising Capability of Diffusion Prior for Solving Inverse Problems
3236 words · 16 mins
Computer Vision
Image Generation
🏢 Tsinghua University
ProjDiff: a novel algorithm that unleashes diffusion models’ denoising power for superior solutions to inverse problems.
Unleashing Region Understanding in Intermediate Layers for MLLM-based Referring Expression Generation
2269 words · 11 mins
Natural Language Processing
Large Language Models
🏢 Tsinghua Shenzhen International Graduate School
Unlocking intermediate layers in MLLMs improves referring expression generation by enhancing accuracy and detail while reducing hallucinations.
Unleashing Multispectral Video's Potential in Semantic Segmentation: A Semi-supervised Viewpoint and New UAV-View Benchmark
2138 words · 11 mins
AI Generated
Computer Vision
Image Segmentation
🏢 University of Alberta
New MVUAV dataset and SemiMV semi-supervised learning model significantly improve multispectral video semantic segmentation!
Unlearnable 3D Point Clouds: Class-wise Transformation Is All You Need
3088 words · 15 mins
Computer Vision
3D Vision
🏢 Huazhong University of Science and Technology
New unlearnable framework secures 3D point cloud data by using class-wise transformations, enabling authorized training while preventing unauthorized access.
Universality of AdaGrad Stepsizes for Stochastic Optimization: Inexact Oracle, Acceleration and Variance Reduction
1717 words · 9 mins
AI Theory
Optimization
🏢 CISPA
Adaptive gradient methods using AdaGrad stepsizes achieve optimal convergence rates for convex composite optimization problems, handling inexact oracles, acceleration, and variance reduction without n…
Universality in Transfer Learning for Linear Models
1460 words · 7 mins
AI Generated
Machine Learning
Transfer Learning
🏢 California Institute of Technology
Transfer learning for linear models achieves universal generalization-error improvements that depend only on first- and second-order target statistics, without requiring Gaussian assumptions.
Universal Sample Coding
2065 words · 10 mins
AI Generated
Machine Learning
Federated Learning
🏢 Imperial College London
Universal Sample Coding revolutionizes data transmission by reducing the number of bits needed to communicate multiple samples from an unknown distribution, achieving significant improvements in federated learning …
Universal Rates for Active Learning
·321 words·2 mins·
loading
·
loading
Machine Learning
Active Learning
🏢 Purdue University
Active learning’s optimal rates are completely characterized, resolving an open problem and providing new algorithms achieving exponential and sublinear rates depending on combinatorial complexity mea…
Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators
2318 words · 11 mins
Machine Learning
Deep Learning
🏢 ELLIS Unit Linz
Universal Physics Transformers (UPTs) offer a unified, scalable framework for efficiently training neural operators across diverse spatio-temporal physics problems, overcoming limitations of existing …
Universal Online Convex Optimization with $1$ Projection per Round
373 words · 2 mins
Machine Learning
Optimization
🏢 Nanjing University
This paper introduces a novel universal online convex optimization algorithm requiring only one projection per round, achieving optimal regret bounds for various function types, including general convex…
Universal Neural Functionals
1439 words · 7 mins
Machine Learning
Deep Learning
🏢 Stanford University
Universal Neural Functionals (UNFs) automatically construct permutation-equivariant models for any weight space, improving learned optimizer performance and generalization.
Universal In-Context Approximation By Prompting Fully Recurrent Models
3295 words · 16 mins
AI Generated
Natural Language Processing
Large Language Models
🏢 University of Oxford
Fully recurrent neural networks can be universal in-context approximators, achieving the same capabilities as transformer models by cleverly using prompts.