🏢 Stanford University

AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning
·3104 words·15 mins
AI Generated Natural Language Processing Large Language Models 🏢 Stanford University
AVATAR: A novel automated framework optimizes LLM agents for effective tool usage via contrastive reasoning, significantly boosting performance on complex tasks.
Automatic Outlier Rectification via Optimal Transport
·2826 words·14 mins
AI Theory Optimization 🏢 Stanford University
This study presents a novel single-step outlier rectification method using optimal transport with a concave cost function, surpassing the limitations of conventional two-stage approaches by jointly op…
Are More LLM Calls All You Need? Towards the Scaling Properties of Compound AI Systems
·1725 words·9 mins
Natural Language Processing Large Language Models 🏢 Stanford University
More LM calls don’t always mean better results for compound AI; this study reveals performance can initially increase then decrease, highlighting the importance of optimal call number prediction.
An Efficient High-dimensional Gradient Estimator for Stochastic Differential Equations
·1548 words·8 mins
AI Generated AI Theory Optimization 🏢 Stanford University
New unbiased gradient estimator for high-dimensional SDEs drastically reduces computation time without sacrificing estimation accuracy.
Aligning Target-Aware Molecule Diffusion Models with Exact Energy Optimization
·2379 words·12 mins
AI Generated Machine Learning Deep Learning 🏢 Stanford University
ALIDIFF aligns target-aware molecule diffusion models with exact energy optimization, generating molecules with state-of-the-art binding energies and improved properties.
Aligning Model Properties via Conformal Risk Control
·1981 words·10 mins
AI Generated AI Theory Safety 🏢 Stanford University
Post-processing pre-trained models for alignment using conformal risk control and property testing guarantees better alignment, even when training data is biased.
Adaptive Sampling for Efficient Softmax Approximation
·1972 words·10 mins
Machine Learning Optimization 🏢 Stanford University
AdaptiveSoftmax: Achieve 10x+ speedup in softmax computation via adaptive sampling!
ActSort: An active-learning accelerated cell sorting algorithm for large-scale calcium imaging datasets
·2928 words·14 mins
Machine Learning Active Learning 🏢 Stanford University
ActSort: Active learning dramatically accelerates cell sorting in massive calcium imaging datasets, minimizing human effort and improving accuracy.
Active Learning for Derivative-Based Global Sensitivity Analysis with Gaussian Processes
·3450 words·17 mins
AI Generated Machine Learning Active Learning 🏢 Stanford University
Boost global sensitivity analysis efficiency by 10x with novel active learning methods targeting derivative-based measures for expensive black-box functions!
ActAnywhere: Subject-Aware Video Background Generation
·1990 words·10 mins
Computer Vision Video Understanding 🏢 Stanford University
ActAnywhere, a novel video diffusion model, seamlessly integrates foreground subjects into new backgrounds by generating realistic video backgrounds tailored to subject motion, significantly reducing …
Accelerating Diffusion Models with Parallel Sampling: Inference at Sub-Linear Time Complexity
·418 words·2 mins
🏢 Stanford University
Researchers use parallel sampling to cut diffusion model inference to sub-linear (poly-logarithmic) time complexity.
Accelerated Regularized Learning in Finite N-Person Games
·1352 words·7 mins
AI Theory Optimization 🏢 Stanford University
Accelerated learning in games: the FTXL algorithm exponentially speeds up convergence to Nash equilibria in finite N-person games, even under limited feedback.
A Critical Evaluation of AI Feedback for Aligning Large Language Models
·2724 words·13 mins
AI Generated Natural Language Processing Large Language Models 🏢 Stanford University
Contrary to popular belief, simple supervised fine-tuning with strong language models outperforms complex reinforcement learning in aligning large language models, significantly improving efficiency.