🏢 University of California, Los Angeles

Theoretical and Empirical Insights into the Origins of Degree Bias in Graph Neural Networks
·2828 words·14 mins
AI Theory Fairness 🏢 University of California, Los Angeles
Researchers unveil the origins of degree bias in Graph Neural Networks (GNNs), proving that high-degree nodes have a lower probability of misclassification and proposing methods to alleviate this bias for fairer GNNs.
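A minimal sketch of how degree bias is commonly measured: group test nodes into degree buckets and compare error rates across buckets. The graph, labels, and predictor below are synthetic toy assumptions, not the paper's data or method.

```python
# Illustrative sketch (not the paper's code): measure degree bias by
# bucketing nodes by degree and comparing misclassification rates.
import numpy as np

rng = np.random.default_rng(0)
n = 200
adj = (rng.random((n, n)) < 0.05).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T                      # symmetric adjacency, no self-loops
degree = adj.sum(axis=1)

y_true = rng.integers(0, 2, size=n)
# Toy predictor whose error rate shrinks with degree, mimicking the
# "high-degree nodes are misclassified less often" phenomenon.
flip_prob = 0.4 / (1.0 + degree)
y_pred = np.where(rng.random(n) < flip_prob, 1 - y_true, y_true)

for lo, hi in [(0, 5), (5, 10), (10, np.inf)]:
    mask = (degree >= lo) & (degree < hi)
    if mask.any():
        err = (y_pred[mask] != y_true[mask]).mean()
        print(f"degree in [{lo}, {hi}): error rate {err:.2f} over {mask.sum()} nodes")
```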
The Star Geometry of Critic-Based Regularizer Learning
·1709 words·9 mins
Machine Learning Unsupervised Learning 🏢 University of California, Los Angeles
Star geometry reveals the structure of optimal data-driven regularizers learned with critic-based objectives.
Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm
·1516 words·8 mins
AI Theory Optimization 🏢 University of California, Los Angeles
The Quantum Approximate Optimization Algorithm (QAOA) achieves weak recovery in the spiked tensor model, matching classical methods and offering potential constant-factor advantages for certain parameters.
Molecule Design by Latent Prompt Transformer
·2788 words·14 mins
🏢 University of California, Los Angeles
Latent Prompt Transformer (LPT) revolutionizes molecule design by unifying generation and optimization, efficiently discovering novel molecules with desired properties.
Identifying Causal Effects Under Functional Dependencies
·1446 words·7 mins
AI Theory Causality 🏢 University of California, Los Angeles
Unlocking identifiability of causal effects: this paper leverages functional dependencies in causal graphs to improve identifiability, requiring fewer observed variables in the data.
How does PDE order affect the convergence of PINNs?
·2124 words·10 mins
AI Generated AI Theory Optimization 🏢 University of California, Los Angeles
Higher-order PDEs hinder the convergence of Physics-Informed Neural Networks (PINNs); this paper provides a theoretical explanation and proposes variable splitting for improved accuracy.
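A minimal sketch of the variable-splitting idea for a fourth-order problem u'''' = f: introduce an auxiliary network v ≈ u'' so each residual only needs second derivatives. The networks `u_net`, `v_net`, the forcing term `f_rhs`, and the training setup are hypothetical choices for illustration, not the paper's implementation (boundary terms are omitted).

```python
# Sketch of variable splitting for a PINN on u'''' = f, under the
# assumptions stated above.
import torch

torch.manual_seed(0)
u_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
v_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(list(u_net.parameters()) + list(v_net.parameters()), lr=1e-3)

def d2(f, x):
    """Second derivative of a scalar-output network f at collocation points x."""
    g = torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]
    return torch.autograd.grad(g.sum(), x, create_graph=True)[0]

f_rhs = lambda x: torch.sin(x)          # hypothetical forcing term

for step in range(1000):
    x = torch.rand(64, 1, requires_grad=True)
    res_u = d2(u_net, x) - v_net(x)     # enforce u'' = v
    res_v = d2(v_net, x) - f_rhs(x)     # enforce v'' = f
    loss = (res_u ** 2).mean() + (res_v ** 2).mean()   # boundary losses omitted
    opt.zero_grad()
    loss.backward()
    opt.step()
```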
Benign overfitting in leaky ReLU networks with moderate input dimension
·366 words·2 mins
AI Theory Generalization 🏢 University of California, Los Angeles
Leaky ReLU networks exhibit benign overfitting under surprisingly relaxed conditions: the input dimension only needs to scale linearly with the sample size, challenging prior assumptions in the field.
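A toy sketch of the benign-overfitting setup: a two-layer leaky ReLU network is trained on noisy labels in a regime where the input dimension grows linearly with the sample size, and its accuracy on clean test data is then checked. The Gaussian-mixture data, noise rate, network width, and optimizer settings below are illustrative assumptions, not the paper's experiments.

```python
# Sketch of a benign-overfitting experiment under the assumptions above.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d = 200, 400                                   # d scales linearly with n (here d = 2n)
mu = torch.randn(d)
mu = 5 * mu / mu.norm()                           # class-mean signal direction
y = torch.sign(torch.randn(n))
X = y[:, None] * mu[None, :] + torch.randn(n, d)
y_noisy = torch.where(torch.rand(n) < 0.1, -y, y) # 10% label noise

net = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.LeakyReLU(0.1), torch.nn.Linear(64, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.01)

for _ in range(2000):
    out = net(X).squeeze(-1)
    loss = F.softplus(-y_noisy * out).mean()      # logistic loss on noisy labels
    opt.zero_grad()
    loss.backward()
    opt.step()

train_acc = (net(X).squeeze(-1).sign() == y_noisy).float().mean()
y_test = torch.sign(torch.randn(1000))
X_test = y_test[:, None] * mu[None, :] + torch.randn(1000, d)
test_acc = (net(X_test).squeeze(-1).sign() == y_test).float().mean()
print(f"fit to noisy train labels: {train_acc:.2f}, clean test accuracy: {test_acc:.2f}")
```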