🏢 Stanford University
ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field
·1770 words·9 mins·
Computer Vision
3D Vision
🏢 Stanford University
ProvNeRF enhances NeRF reconstruction by modeling per-point provenance as a stochastic field, improving novel view synthesis and uncertainty estimation, particularly in sparse, unconstrained view settings.
Post-Hoc Reversal: Are We Selecting Models Prematurely?
·2661 words·13 mins·
Machine Learning
Deep Learning
🏢 Stanford University
Post-hoc model transformations can reverse performance trends, prompting a reevaluation of model selection strategies and suggesting a new ‘post-hoc selection’ method for improved model development.
Policy-shaped prediction: avoiding distractions in model-based reinforcement learning
·2695 words·13 mins·
Machine Learning
Reinforcement Learning
🏢 Stanford University
Policy-Shaped Prediction (PSP) improves model-based reinforcement learning by focusing world models on task-relevant information, significantly enhancing robustness against distracting stimuli.
PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher
·2966 words·14 mins·
Computer Vision
Image Generation
🏢 Stanford University
PaGoDA: Train high-resolution image generators efficiently by progressively growing a one-step generator from a low-resolution diffusion model. This innovative pipeline drastically cuts training costs.
Optimization Algorithm Design via Electric Circuits
·3889 words·19 mins·
AI Theory
Optimization
🏢 Stanford University
Design provably convergent optimization algorithms swiftly using electric-circuit analogies: a novel methodology that automates discretization for diverse algorithms.
Optimistic Verifiable Training by Controlling Hardware Nondeterminism
·1645 words·8 mins·
Machine Learning
Federated Learning
🏢 Stanford University
Researchers developed a verifiable training method that uses high-precision training with adaptive rounding and logging to achieve exact training replication across different GPUs, enabling efficient …
OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators
·2594 words·13 mins·
Machine Learning
Reinforcement Learning
🏢 Stanford University
OPERA: A new algorithm intelligently blends multiple offline policy evaluation estimators for more accurate policy performance estimates.
Off-Policy Selection for Initiating Human-Centric Experimental Design
·1760 words·9 mins·
AI Applications
Education
🏢 Stanford University
First-Glance Off-Policy Selection (FPS) revolutionizes human-centric AI by enabling personalized policy selection for new participants without prior data, improving learning and healthcare outcomes.
OccFusion: Rendering Occluded Humans with Generative Diffusion Priors
·2014 words·10 mins·
Computer Vision
3D Vision
🏢 Stanford University
OccFusion: High-fidelity human rendering from videos, even with occlusions, using 3D Gaussian splatting and 2D diffusion priors.
Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms
·2334 words·11 mins·
Machine Learning
Optimization
🏢 Stanford University
Newton Losses enhance training of neural networks with complex objectives by using second-order information from loss functions, achieving significant performance gains.
Neural decoding from stereotactic EEG: accounting for electrode variability across subjects
·1818 words·9 mins·
Machine Learning
Transfer Learning
🏢 Stanford University
seegnificant, a scalable SEEG decoding model, leverages transformers to decode behavior across subjects despite electrode variability, achieving high accuracy and transfer-learning capability.
Near-Optimal Streaming Heavy-Tailed Statistical Estimation with Clipped SGD
·397 words·2 mins·
AI Generated
AI Theory
Optimization
🏢 Stanford University
Clipped SGD achieves near-optimal sub-Gaussian rates for high-dimensional heavy-tailed statistical estimation in streaming settings, improving upon existing state-of-the-art results.
MoEUT: Mixture-of-Experts Universal Transformers
·2486 words·12 mins·
Natural Language Processing
Large Language Models
🏢 Stanford University
MoEUT: Mixture-of-Experts Universal Transformers significantly improves the compute efficiency of Universal Transformers, making them competitive with standard Transformers in large-scale language modeling.
Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems
·1594 words·8 mins·
Machine Learning
Deep Learning
🏢 Stanford University
gpSLDS, a novel model, balances expressiveness and interpretability in modeling complex neural dynamics by combining Gaussian processes with switching linear dynamical systems, improving accuracy and …
Mixture of neural fields for heterogeneous reconstruction in cryo-EM
·4281 words·21 mins·
AI Generated
Computer Vision
3D Vision
🏢 Stanford University
Hydra: a novel cryo-EM reconstruction method that resolves both conformational and compositional heterogeneity ab initio, enabling the analysis of complex, unpurified samples with state-of-the-art accuracy.
Make-it-Real: Unleashing Large Multimodal Model for Painting 3D Objects with Realistic Materials
·4579 words·22 mins·
AI Generated
Multimodal Learning
Vision-Language Models
🏢 Stanford University
Make-it-Real uses a large multimodal language model to automatically paint realistic materials onto 3D objects, drastically improving realism and saving developers time.
Learning Linear Causal Representations from General Environments: Identifiability and Intrinsic Ambiguity
·1476 words·7 mins·
AI Theory
Representation Learning
🏢 Stanford University
LiNGCREL, a novel algorithm, provably recovers linear causal representations from diverse environments, achieving identifiability despite intrinsic ambiguities, thus advancing causal AI.
Learning Formal Mathematics From Intrinsic Motivation
·1732 words·9 mins·
Reinforcement Learning
🏢 Stanford University
AI agent MINIMO learns to generate challenging mathematical conjectures and prove them, bootstrapping from axioms alone and self-improving in both conjecture generation and theorem proving.
Large language model validity via enhanced conformal prediction methods
·2089 words·10 mins·
Natural Language Processing
Large Language Models
🏢 Stanford University
New conformal inference methods enhance LLM validity by providing adaptive validity guarantees and improving the quality of LLM outputs, addressing prior methods’ limitations.
Instructor-inspired Machine Learning for Robust Molecular Property Prediction
·2041 words·10 mins·
Machine Learning
Semi-Supervised Learning
🏢 Stanford University
InstructMol, a novel semi-supervised learning algorithm, leverages unlabeled data and an instructor model to significantly improve the accuracy and robustness of molecular property prediction, even with…