AI Theory
Harnessing Multiple Correlated Networks for Exact Community Recovery
·2191 words·11 mins
AI Generated
AI Theory
Optimization
🏢 Northwestern University
Unlocking latent community structures from multiple correlated networks is now possible with greater precision, as this research pinpoints the information-theoretic threshold for exact recovery, even …
HardCore Generation: Generating Hard UNSAT Problems for Data Augmentation
·2159 words·11 mins
AI Theory
Optimization
🏢 McGill University
HardCore: Fast generation of hard, realistic UNSAT problems for improved SAT solver runtime prediction.
Group-wise oracle-efficient algorithms for online multi-group learning
·316 words·2 mins
AI Theory
Fairness
🏢 Columbia University
Oracle-efficient algorithms conquer online multi-group learning, achieving sublinear regret even with massive, overlapping groups, paving the way for fair and efficient large-scale online systems.
GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models
·2613 words·13 mins
AI Theory
Robustness
🏢 Chinese University of Hong Kong
GREAT Score: A novel framework using generative models for efficiently and accurately evaluating the global robustness of machine learning models against adversarial attacks.
GraphTrail: Translating GNN Predictions into Human-Interpretable Logical Rules
·2764 words·13 mins
AI Theory
Interpretability
🏢 IIT Delhi
GRAPHTRAIL unveils the first end-to-end global GNN explainer, translating black-box GNN predictions into easily interpretable boolean formulas over subgraph concepts, achieving significant improvement…
Graphcode: Learning from multiparameter persistent homology using graph neural networks
·2894 words·14 mins
AI Generated
AI Theory
Representation Learning
🏢 Graz University of Technology
Graphcodes efficiently summarize complex datasets’ topological properties using graph neural networks, enhancing machine learning accuracy.
Graph Neural Networks and Arithmetic Circuits
·465 words·3 mins
AI Generated
AI Theory
Generalization
🏢 Leibniz University Hanover
The computational power of graph neural networks (GNNs) precisely mirrors that of arithmetic circuits, as proven via a novel C-GNN model; this reveals fundamental limits to GNN scalability.
Gradient-Variation Online Learning under Generalized Smoothness
·271 words·2 mins
AI Generated
AI Theory
Optimization
🏢 National Key Laboratory for Novel Software Technology, Nanjing University, China
This paper presents a novel optimistic mirror descent algorithm achieving optimal gradient-variation regret under generalized smoothness, applicable to convex and strongly convex functions, and fast-…
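A minimal sketch of the optimistic mirror descent template behind such gradient-variation results, assuming the Euclidean mirror map and using the previous gradient as the optimism hint; the step size, loss oracle, and dimension below are illustrative, not the paper's exact algorithm.

    import numpy as np

    def optimistic_md(loss_grad, T, dim, eta=0.1):
        """Optimistic mirror descent with the Euclidean mirror map.

        Plays from a 'lazy' iterate using the previous gradient as the
        hint m_t = g_{t-1}, so regret scales with the gradient variation
        sum_t ||g_t - g_{t-1}||^2 rather than with T alone.
        """
        x_hat = np.zeros(dim)   # lazy iterate
        hint = np.zeros(dim)    # m_t: last observed gradient
        for t in range(T):
            x = x_hat - eta * hint      # optimistic play using the hint
            g = loss_grad(t, x)         # observe the round-t gradient
            x_hat = x_hat - eta * g     # lazy update with the true gradient
            hint = g
        return x_hat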
Gradient Methods for Online DR-Submodular Maximization with Stochastic Long-Term Constraints
·346 words·2 mins
AI Theory
Optimization
🏢 Iowa State University
Novel gradient-based algorithms achieve O(√T) regret and O(T^{3/4}) constraint violation for online DR-submodular maximization with stochastic long-term constraints.
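A hedged sketch of the primal-dual gradient template that algorithms of this kind typically instantiate (not the paper's exact method): ascend on the Lagrangian in the primal, and let a dual variable accumulate constraint violation. The box constraint, step sizes, and linear long-term constraint a @ x <= b are illustrative assumptions.

    import numpy as np

    def primal_dual_online(grad_f, a, b, T, dim, eta=0.05, rho=0.05):
        """Online primal-dual gradient sketch for maximizing rewards
        under a stochastic long-term linear constraint a @ x <= b.

        grad_f(t, x) returns a stochastic gradient of the round-t reward.
        """
        x = np.zeros(dim)
        lam = 0.0  # dual variable tracking cumulative violation
        for t in range(T):
            # primal ascent on the Lagrangian f_t(x) - lam * (a @ x - b)
            x = np.clip(x + eta * (grad_f(t, x) - lam * a), 0.0, 1.0)
            # dual ascent: lam grows while the constraint is violated
            lam = max(0.0, lam + rho * (a @ x - b))
        return x, lam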
Gradient Guidance for Diffusion Models: An Optimization Perspective
·2233 words·11 mins
AI Theory
Optimization
🏢 Princeton University
This paper provides a novel optimization framework for guided diffusion models, proving Õ(1/K) convergence for concave objective functions and demonstrating structure-preserving guidance.
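A toy sketch of gradient guidance in one reverse-diffusion step: the denoising update is nudged by the gradient of an external objective f evaluated at the predicted clean sample. The denoiser, noise schedule, and guidance scale are placeholder assumptions, not the paper's construction.

    import numpy as np

    def guided_reverse_step(x_t, t, denoise, grad_f, alpha=0.9, scale=0.1):
        """One illustrative guided reverse-diffusion step.

        denoise(x_t, t): model's estimate of the clean sample x_0
        grad_f(x):       gradient of the objective being maximized
        """
        x0_hat = denoise(x_t, t)               # predicted clean sample
        mean = np.sqrt(alpha) * x0_hat         # toy posterior mean
        mean = mean + scale * grad_f(x0_hat)   # gradient guidance term
        noise = np.sqrt(1.0 - alpha) * np.random.randn(*np.shape(x_t))
        return mean + noise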
Globally Convergent Variational Inference
·2036 words·10 mins
AI Generated
AI Theory
Optimization
🏢 University of Michigan
Researchers achieve globally convergent variational inference by minimizing the expected forward KL divergence, overcoming the limitations of traditional methods.
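One reason the forward direction helps: for a Gaussian q, minimizing KL(p || q) is exactly moment matching, so the optimum is unique and computable. A minimal sketch, assuming a vectorized 1-D unnormalized target log_p and a wide Gaussian proposal for self-normalized importance sampling (all names illustrative).

    import numpy as np

    def fit_gaussian_forward_kl(log_p, n=10_000, seed=0):
        """Fit a 1-D Gaussian q minimizing forward KL(p || q) by
        estimating E_p[x] and Var_p[x] with self-normalized
        importance sampling from a wide N(0, 10^2) proposal."""
        rng = np.random.default_rng(seed)
        xs = rng.normal(0.0, 10.0, n)
        log_prop = -0.5 * (xs / 10.0) ** 2 - np.log(10.0 * np.sqrt(2 * np.pi))
        logw = log_p(xs) - log_prop
        w = np.exp(logw - logw.max())
        w /= w.sum()                           # self-normalized weights
        mu = np.sum(w * xs)                    # forward-KL optimal mean
        var = np.sum(w * (xs - mu) ** 2)       # forward-KL optimal variance
        return mu, var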
Global Distortions from Local Rewards: Neural Coding Strategies in Path-Integrating Neural Systems
·3589 words·17 mins
AI Generated
AI Theory
Representation Learning
🏢 UC Santa Barbara
Reward-driven distortions in grid cell patterns are global, not local, preserving path integration while encoding environmental landmarks in spatial navigation.
Global Convergence in Training Large-Scale Transformers
·398 words·2 mins
AI Generated
AI Theory
Optimization
🏢 Princeton University
Large-scale Transformer training’s global convergence is proven using weight decay regularization and a refined mean-field analysis, bridging theory and practice.
GLinSAT: The General Linear Satisfiability Neural Network Layer By Accelerated Gradient Descent
·1911 words·9 mins
AI Theory
Optimization
🏢 Tsinghua University
GLinSAT: A novel neural network layer efficiently solves general linear constraint satisfaction problems via accelerated gradient descent, enabling differentiable backpropagation and improved GPU perf…
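A minimal sketch of the core operation such a layer performs: map raw network outputs onto the feasible set {x : A @ x <= b} with plain gradient descent on a smooth penalized projection objective, so the loop can be unrolled for backpropagation. The penalty formulation and step counts are assumptions, not GLinSAT's exact accelerated scheme.

    import numpy as np

    def project_linear(y, A, b, steps=200, eta=0.01, rho=10.0):
        """Approximately project y onto {x : A @ x <= b} by descending
        ||x - y||^2 / 2 + (rho / 2) * sum(relu(A @ x - b)^2)."""
        x = y.copy()
        for _ in range(steps):
            viol = np.maximum(A @ x - b, 0.0)        # constraint violations
            grad = (x - y) + rho * (A.T @ viol)      # penalty gradient
            x = x - eta * grad
        return x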
Gliding over the Pareto Front with Uniform Designs
·2556 words·12 mins
AI Theory
Optimization
🏢 Computer Science, City University of Hong Kong
UMOD: a novel multi-objective optimization algorithm efficiently generates uniformly distributed Pareto-optimal solutions by maximizing minimal pairwise distances, providing high-quality representatio…
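A hedged sketch of the uniformity criterion the summary names: among candidate objective vectors, keep a subset with large minimal pairwise distance. The greedy farthest-point rule below is a classic stand-in for that max-min objective, not UMOD itself.

    import numpy as np

    def max_min_distance_subset(points, k, seed=0):
        """Greedily grow a subset by adding the candidate farthest
        from the points chosen so far, a standard heuristic for
        maximizing the minimal pairwise distance."""
        rng = np.random.default_rng(seed)
        chosen = [int(rng.integers(len(points)))]
        for _ in range(k - 1):
            dists = np.linalg.norm(
                points[:, None] - points[chosen][None], axis=-1)
            chosen.append(int(np.argmax(dists.min(axis=1))))
        return points[chosen]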
Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning
·2253 words·11 mins
AI Theory
Optimization
🏢 Stanford University
Unbalanced initializations dramatically accelerate neural network feature learning by modifying the geometry of learning trajectories, enabling faster feature extraction and improved generalization.
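A toy illustration of the mechanism: in a scalar two-layer linear net f(x) = a * b * x, gradient descent approximately conserves a^2 - b^2, so an unbalanced start changes the geometry of the trajectory and how quickly the product a * b moves. All constants below are illustrative.

    def train_two_layer(a0, b0, target=1.0, lr=0.01, steps=2000):
        """Gradient descent on L = (a * b - target)^2 / 2.

        Balanced small inits (a0 ~= b0 ~= 0) stall in a flat region,
        while unbalanced inits (|a0| >> |b0|) give one factor a large
        effective learning rate and escape quickly."""
        a, b = a0, b0
        for _ in range(steps):
            err = a * b - target
            a, b = a - lr * err * b, b - lr * err * a
        return a, b

    print(train_two_layer(0.1, 0.1))   # balanced: slow early learning
    print(train_two_layer(2.0, 0.01))  # unbalanced: rapid feature learning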
Generalization of Hamiltonian algorithms
·344 words·2 mins
AI Generated
AI Theory
Generalization
🏢 Istituto Italiano Di Tecnologia
New, tighter generalization bounds are derived for a class of stochastic learning algorithms that generate absolutely continuous probability distributions, enhancing our understanding of their perform…
Generalization Error Bounds for Two-stage Recommender Systems with Tree Structure
·386 words·2 mins
AI Theory
Generalization
🏢 University of Science and Technology of China
Two-stage recommender systems using tree structures achieve better generalization with more branches and harmonized training data distributions across stages.
Generalization Bounds via Conditional $f$-Information
·358 words·2 mins
AI Theory
Generalization
🏢 Tongji University
New information-theoretic generalization bounds, based on conditional f-information, improve existing methods by addressing unboundedness and offering a generic approach applicable to various loss fun…
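For orientation, the (unconditional) f-information underlying such bounds is the f-divergence between the joint law and the product of marginals; a standard definition, not the paper's conditional variant verbatim:

    % f-information: an f-divergence between joint and product of marginals
    I_f(X;Y) = D_f\left( P_{X,Y} \,\middle\|\, P_X \otimes P_Y \right)
             = \mathbb{E}_{P_X \otimes P_Y}\!\left[ f\!\left( \frac{\mathrm{d}P_{X,Y}}{\mathrm{d}(P_X \otimes P_Y)} \right) \right],
      \quad f \text{ convex},\ f(1) = 0.

Choosing f(t) = t log t recovers mutual information; the conditional variant evaluates the same divergence given shared auxiliary randomness.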
Generalization Bound and Learning Methods for Data-Driven Projections in Linear Programming
·1748 words·9 mins
AI Generated
AI Theory
Optimization
🏢 University of Tokyo
Learn to project, solve faster! This paper introduces data-driven projections for solving high-dimensional linear programs, proving theoretical guarantees and demonstrating significant improvements in…
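A hedged sketch of the data-driven projection idea: replace a high-dimensional LP max c @ x s.t. A @ x <= b with a small LP over x = P @ y for a projection matrix P, then map the reduced solution back. Here P is a stand-in argument; learning a good P is the paper's subject.

    import numpy as np
    from scipy.optimize import linprog

    def solve_projected_lp(c, A, b, P):
        """Solve max c @ x s.t. A @ x <= b restricted to x = P @ y.

        The reduced LP has only P.shape[1] variables; a good learned P
        keeps (near-)optimal points of the full LP in its column space."""
        res = linprog(-(P.T @ c), A_ub=A @ P, b_ub=b,
                      bounds=[(None, None)] * P.shape[1])
        return P @ res.x   # candidate solution in the original space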