University of Texas at Austin
Zero-Shot Transfer of Neural ODEs
·1748 words·9 mins·
AI Applications
Robotics
University of Texas at Austin
Zero-shot Neural ODEs enable autonomous systems to rapidly adapt to unseen scenarios by learning a space of dynamical systems spanned by neural ODE basis functions, achieving efficient online adaptation.
YouDream: Generating Anatomically Controllable Consistent Text-to-3D Animals
·2987 words·15 mins·
Natural Language Processing
Text Generation
University of Texas at Austin
YOUDREAM generates anatomically consistent, high-quality 3D animal models from text and 2D pose priors, pushing creative boundaries in text-to-3D generation.
Warped Diffusion: Solving Video Inverse Problems with Image Diffusion Models
·2837 words·14 mins·
Computer Vision
Image Generation
University of Texas at Austin
Warped Diffusion cleverly adapts image diffusion models for video inverse problems, solving flickering and temporal inconsistency issues by viewing video frames as continuous warping transformations.
Transfer Learning for Latent Variable Network Models
·1891 words·9 mins·
Machine Learning
Transfer Learning
University of Texas at Austin
This paper presents efficient algorithms for transfer learning in latent variable network models, achieving vanishing error under specific conditions and attaining minimax optimal rates.
Synthesize, Partition, then Adapt: Eliciting Diverse Samples from Foundation Models
·1594 words·8 mins·
Natural Language Processing
Large Language Models
University of Texas at Austin
The Synthesize-Partition-Adapt (SPA) framework leverages synthetic data to generate diverse, high-quality responses from foundation models, enriching user experience.
Symbolic Regression with a Learned Concept Library
·2112 words·10 mins·
Natural Language Processing
Large Language Models
University of Texas at Austin
LASR, a novel symbolic regression method, uses zero-shot LLM queries to discover and evolve abstract concepts, substantially outperforming state-of-the-art approaches and discovering a new LLM scaling law.
SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors
·2772 words·14 mins·
Natural Language Processing
Large Language Models
University of Texas at Austin
SVFT, a novel parameter-efficient fine-tuning method, achieves near full fine-tuning accuracy using only 0.006% to 0.25% of parameters, significantly outperforming existing techniques.
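A minimal sketch of the singular-vector idea behind this kind of fine-tuning, assuming its simplest form: freeze the pretrained weight's singular vectors and train only a small perturbation of the singular values. Sizes and values below are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 6))            # pretrained weight matrix (frozen)
U, S, Vt = np.linalg.svd(W, full_matrices=False)

delta = np.zeros_like(S)               # trainable: one scalar per singular value
delta[:2] = 0.1                        # pretend fine-tuning nudged a few entries

# adapted weight keeps the frozen singular vectors, shifts singular values
W_adapted = U @ np.diag(S + delta) @ Vt
print(W_adapted.shape)                 # same shape as W; only len(S) new params
```

With `delta` all zeros, `W_adapted` reconstructs `W` exactly, which is why only the perturbation needs to be stored and trained.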
Stochastic Newton Proximal Extragradient Method
·1769 words·9 mins·
AI Generated
AI Theory
Optimization
University of Texas at Austin
Stochastic Newton Proximal Extragradient (SNPE) achieves faster global and local convergence rates for strongly convex functions, improving upon existing stochastic Newton methods.
SkiLD: Unsupervised Skill Discovery Guided by Factor Interactions
·2028 words·10 mins·
Machine Learning
Reinforcement Learning
University of Texas at Austin
SkiLD, a novel unsupervised skill discovery method, uses state factorization and a new objective function to learn skills that induce diverse interactions between state factors, outperforming existing methods.
Sigmoid Gating is More Sample Efficient than Softmax Gating in Mixture of Experts
·1350 words·7 mins·
Machine Learning
Deep Learning
University of Texas at Austin
Sigmoid gating significantly boosts sample efficiency in Mixture of Experts models compared to softmax gating, offering faster convergence rates for various expert functions.
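The contrast between the two gating functions is easy to sketch. This toy example (scalar expert outputs and made-up logits, purely illustrative) shows that softmax weights are coupled to sum to 1, while sigmoid scores each expert independently:

```python
import numpy as np

def softmax_gate(logits):
    # coupled weights: a simplex over experts
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sigmoid_gate(logits):
    # independent per-expert weights in (0, 1)
    return 1.0 / (1.0 + np.exp(-logits))

logits = np.array([2.0, 0.5, -1.0])
experts = np.array([1.0, 2.0, 3.0])    # scalar expert outputs for one input

y_softmax = softmax_gate(logits) @ experts
y_sigmoid = sigmoid_gate(logits) @ experts
print(y_softmax, y_sigmoid)
```

Because sigmoid weights are not forced to compete on a simplex, raising one expert's score does not suppress the others, which is the structural difference the sample-efficiency analysis turns on.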
Quadratic Quantum Variational Monte Carlo
·1669 words·8 mins·
AI Theory
Optimization
University of Texas at Austin
Q2VMC, a novel quantum chemistry algorithm, drastically boosts the efficiency and accuracy of solving the Schrödinger equation using a quadratic update mechanism and neural network ansatzes.
Pseudo-Private Data Guided Model Inversion Attacks
·4550 words·22 mins·
AI Generated
AI Theory
Privacy
University of Texas at Austin
Pseudo-Private Data Guided Model Inversion (PPDG-MI) significantly improves model inversion attacks by dynamically tuning the generative model to increase the sampling probability of actual private data.
PPLNs: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond
·1984 words·10 mins·
Computer Vision
3D Vision
University of Texas at Austin
Parametric Piecewise Linear Networks (PPLNs) achieve state-of-the-art results in event-based and frame-based computer vision tasks by mimicking biological neural principles.
PACE: Pacing Operator Learning to Accurate Optical Field Simulation for Complicated Photonic Devices
·2611 words·13 mins·
Machine Learning
Deep Learning
University of Texas at Austin
PACE, a novel neural operator, achieves unprecedented accuracy and speed in optical field simulation for complex photonic devices, surpassing existing methods by significantly reducing errors.
Optimization Can Learn Johnson Lindenstrauss Embeddings
·412 words·2 mins·
AI Theory
Optimization
University of Texas at Austin
Optimization can learn optimal Johnson-Lindenstrauss embeddings, avoiding the limitations of randomized methods and achieving comparable theoretical guarantees.
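As a toy illustration of learning a projection by optimization rather than random sampling, one can run gradient descent on the squared distortion of pairwise distances for a fixed point set. The sizes, step rule, and iteration count below are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 20, 10, 5
X = rng.normal(size=(n, d))                   # n points in d dimensions
A = rng.normal(size=(d, k)) / np.sqrt(k)      # random JL-style initialization

Diff = X[:, None, :] - X[None, :, :]          # all pairwise difference vectors
D = (Diff ** 2).sum(-1)                       # target squared distances

def loss(A):
    E = ((Diff @ A) ** 2).sum(-1) - D         # per-pair distortion
    return (E ** 2).sum()

init = loss(A)
for _ in range(200):
    P = Diff @ A
    E = (P ** 2).sum(-1) - D
    # exact gradient of the summed squared distortion w.r.t. A
    G = 4 * np.einsum('ij,ijd,ijk->dk', E, Diff, P)
    A -= 1e-3 * G / (np.linalg.norm(G) + 1e-12)   # small normalized step
print(init, loss(A))                          # distortion drops from the random start
```

The optimized `A` plays the role a random Gaussian matrix plays in the classical JL construction, but is tuned to the given point set.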
Oja's Algorithm for Streaming Sparse PCA
·382 words·2 mins·
Machine Learning
Unsupervised Learning
University of Texas at Austin
Oja’s algorithm achieves minimax optimal error rates for streaming sparse PCA using a simple single-pass thresholding method, requiring only O(d) space and O(nd) time.
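The single-pass, O(d)-memory flavor of Oja's rule followed by hard thresholding can be sketched on a synthetic spiked stream. The step size, threshold, and data model here are illustrative assumptions, not the paper's tuned choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 5000
v_true = np.zeros(d)
v_true[:3] = 1 / np.sqrt(3)                   # sparse leading direction

w = rng.normal(size=d)
w /= np.linalg.norm(w)
for t in range(1, n + 1):
    # spiked stream: covariance is I + 4 * v_true v_true^T
    x = rng.normal(size=d) + 2.0 * rng.normal() * v_true
    w += (1.0 / t) * (x @ w) * x              # Oja update: O(d) space per sample
    w /= np.linalg.norm(w)

w_sparse = np.where(np.abs(w) > 0.1, w, 0.0)  # single hard-thresholding pass
w_sparse /= np.linalg.norm(w_sparse)
print(abs(w_sparse @ v_true))                 # cosine alignment with the planted direction
```

Each sample is touched once and discarded, so the whole run uses O(d) space and O(nd) time, matching the complexity the summary quotes.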
Non-asymptotic Global Convergence Analysis of BFGS with the Armijo-Wolfe Line Search
·523 words·3 mins·
Optimization
University of Texas at Austin
The BFGS algorithm achieves global linear and superlinear convergence rates with an inexact Armijo-Wolfe line search, even without precise Hessian knowledge.
Neural Cover Selection for Image Steganography
·3814 words·18 mins·
AI Generated
Computer Vision
Image Generation
University of Texas at Austin
This study introduces a neural cover selection framework for image steganography, optimizing latent spaces in generative models to improve message recovery and image quality.
N-agent Ad Hoc Teamwork
·3605 words·17 mins·
AI Generated
Machine Learning
Reinforcement Learning
University of Texas at Austin
POAM, a new algorithm, excels at multi-agent cooperation by adapting to diverse and changing teammates in dynamic scenarios.
Memory-Efficient LLM Training with Online Subspace Descent
·1794 words·9 mins·
Natural Language Processing
Large Language Models
University of Texas at Austin
Online Subspace Descent: a novel memory-efficient LLM training algorithm guaranteed to converge, closing the performance gap with full-rank methods.