🏢 UC Santa Barbara
T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback
·3387 words·16 mins·
Multimodal Learning
Vision-Language Models
🏢 UC Santa Barbara
T2V-Turbo breaks the quality bottleneck of video consistency models by integrating mixed reward feedback during consistency distillation, enabling high-quality video generation with significantly faster inference.
Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity
·361 words·2 mins·
AI Theory
Optimization
🏢 UC Santa Barbara
Stochastic zeroth-order optimization of strongly convex functions with Lipschitz Hessian achieves optimal sample complexity, as proven by matching upper and lower bounds with a novel two-stage algorithm.
Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference
·2947 words·14 mins·
Natural Language Processing
Large Language Models
🏢 UC Santa Barbara
Reversing the forget-retain objectives yields an efficient LLM unlearning framework built on logit difference!
Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks
·1457 words·7 mins·
AI Generated
Machine Learning
Deep Learning
🏢 UC Santa Barbara
Overparameterized ConvResNets surprisingly excel at prediction; this study proves they efficiently learn smooth functions on low-dimensional manifolds, avoiding the curse of dimensionality.
Learning Neural Contracting Dynamics: Extended Linearization and Global Guarantees
·1442 words·7 mins·
AI Theory
Robustness
🏢 UC Santa Barbara
ELCD: The first neural network guaranteeing globally contracting dynamics!
Global Distortions from Local Rewards: Neural Coding Strategies in Path-Integrating Neural Systems
·3589 words·17 mins·
AI Generated
AI Theory
Representation Learning
🏢 UC Santa Barbara
Reward-driven distortions in grid cell patterns are global, not local, preserving path integration while encoding environmental landmarks in spatial navigation.
Can Language Models Learn to Skip Steps?
·2929 words·14 mins·
Natural Language Processing
Large Language Models
🏢 UC Santa Barbara
Language models learn to skip steps in reasoning, improving efficiency and generalization, showcasing emergent human-like cognitive abilities.