🏢 UC San Diego
DiffuserLite: Towards Real-time Diffusion Planning
·1923 words·10 mins·
AI Applications
Robotics
🏢 UC San Diego
DiffuserLite: a super-fast diffusion planning framework achieving real-time performance (122 Hz).
Differentially Private Reinforcement Learning with Self-Play
·347 words·2 mins·
AI Theory
Privacy
🏢 UC San Diego
This paper presents DP-Nash-VI, a novel algorithm ensuring trajectory-wise privacy in multi-agent reinforcement learning, achieving near-optimal regret bounds under both joint and local differential privacy.
Continuous Partitioning for Graph-Based Semi-Supervised Learning
·2240 words·11 mins·
Machine Learning
Semi-Supervised Learning
🏢 UC San Diego
CutSSL: a novel framework for graph-based semi-supervised learning that surpasses state-of-the-art accuracy by solving a continuous nonconvex quadratic program that provably yields integer solutions, exce…
Clustering with Non-adaptive Subset Queries
·407 words·2 mins·
Machine Learning
Unsupervised Learning
🏢 UC San Diego
This paper introduces novel non-adaptive algorithms for clustering with subset queries, achieving near-linear query complexity and overcoming the limitations of pairwise query methods.
Average gradient outer product as a mechanism for deep neural collapse
·2027 words·10 mins·
AI Theory
Optimization
🏢 UC San Diego
Deep Neural Collapse (DNC) explained via Average Gradient Outer Product (AGOP).
Adapting Diffusion Models for Improved Prompt Compliance and Controllable Image Synthesis
·3442 words·17 mins·
Computer Vision
Image Generation
🏢 UC San Diego
FG-DMs revolutionize image synthesis by jointly modeling image and condition distributions, achieving higher object recall and enabling flexible editing.
Accelerating Transformers with Spectrum-Preserving Token Merging
·3201 words·16 mins·
Multimodal Learning
Vision-Language Models
🏢 UC San Diego
PITOME: a novel token merging method that accelerates Transformers by 40-60% while preserving accuracy, prioritizing informative tokens via an energy score.