
Posters

2024

Globally Q-linear Gauss-Newton Method for Overparameterized Non-convex Matrix Sensing
·1454 words·7 mins
Machine Learning Optimization 🏢 School of Mathematics and Statistics, Xidian University
A globally Q-linearly converging Gauss-Newton method (AGN) is introduced for overparameterized non-convex low-rank matrix sensing, significantly improving convergence compared to existing gradient des…
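As a refresher on the underlying idea, here is a minimal numpy sketch of a plain Gauss-Newton iteration for the simplest overparameterized factorization problem, minimizing ||UU^T − M||_F^2 over U (matrix sensing with an identity sensing operator). It illustrates only the Gauss-Newton step, not the paper's AGN method or its global convergence guarantee; the helper names are illustrative.

```python
import numpy as np

def residual(U, M):
    # Residual of the factorization model U U^T ≈ M, flattened to a vector.
    return (U @ U.T - M).ravel()

def gauss_newton_step(U, M, eps=1e-6):
    # One Gauss-Newton step for min_U ||U U^T - M||_F^2; the Jacobian w.r.t.
    # vec(U) is formed by finite differences for clarity, not efficiency.
    n, r = U.shape
    r0 = residual(U, M)
    J = np.empty((r0.size, n * r))
    for k in range(n * r):
        dU = np.zeros(n * r)
        dU[k] = eps
        J[:, k] = (residual(U + dU.reshape(n, r), M) - r0) / eps
    # Minimum-norm step via least squares (handles the rank deficiency that
    # overparameterization introduces).
    delta, *_ = np.linalg.lstsq(J, -r0, rcond=None)
    return U + delta.reshape(n, r)

# Toy run: recover a rank-1 matrix with an overparameterized rank-3 factor.
rng = np.random.default_rng(0)
u_true = rng.normal(size=(5, 1))
M = u_true @ u_true.T
U = 0.1 * rng.normal(size=(5, 3))      # small random overparameterized start
for _ in range(30):
    U = gauss_newton_step(U, M)
print(np.linalg.norm(U @ U.T - M))     # misfit should be tiny after a few steps
```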
Globally Convergent Variational Inference
·2036 words·10 mins
AI Generated AI Theory Optimization 🏢 University of Michigan
Researchers achieve globally convergent variational inference by minimizing the expected forward KL divergence, overcoming the limitations of traditional methods.
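The key object here is the forward KL divergence KL(p || q): since E_p[log p] does not depend on q, minimizing it over a variational family is the same as maximizing E_p[log q]. The numpy sketch below shows that equivalence in a toy setting by fitting a Gaussian q to samples standing in for p via plain gradient descent; it only illustrates forward-KL fitting, not the paper's algorithm.

```python
import numpy as np

# Forward KL: KL(p || q) = E_p[log p] - E_p[log q].  The first term does not
# depend on q, so minimizing KL(p || q) over q means maximizing E_p[log q].
rng = np.random.default_rng(0)
x = rng.gamma(shape=3.0, scale=2.0, size=10_000)   # stand-in for samples from p

# Fit a Gaussian q = N(mu, sigma^2) by gradient descent on -mean(log q).
mu, log_sigma = 0.0, 0.0
for _ in range(3000):
    sigma = np.exp(log_sigma)
    z = (x - mu) / sigma
    grad_mu = -(z / sigma).mean()          # d/d mu        of -mean(log q)
    grad_ls = (1.0 - z ** 2).mean()        # d/d log_sigma of -mean(log q)
    mu -= 0.05 * grad_mu
    log_sigma -= 0.05 * grad_ls
print(mu, np.exp(log_sigma))   # approaches p's mean (~6.0) and std (~3.46)
```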
Global Rewards in Restless Multi-Armed Bandits
·2031 words·10 mins
Machine Learning Reinforcement Learning 🏢 Carnegie Mellon University
Restless multi-armed bandits with global rewards (RMAB-G) are introduced, extending the model to handle non-separable rewards and offering novel index-based and adaptive policies that outperform exist…
Global Lyapunov functions: a long-standing open problem in mathematics, with symbolic transformers
·2454 words·12 mins
AI Generated Natural Language Processing Large Language Models 🏢 Meta AI
AI-powered sequence-to-sequence transformers surpass human and algorithmic abilities in discovering Lyapunov functions for dynamical systems, solving a long-standing open problem in mathematics.
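While the discovery step in the paper is done by a trained transformer, checking a candidate Lyapunov function is straightforward symbolic computation. A minimal sympy sketch of that verification step for a toy system (not taken from the paper) is shown below.

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)

# Toy dynamical system dx/dt = f1, dy/dt = f2 with an asymptotically stable origin.
f1 = -x**3 + y
f2 = -x - y**3

# Candidate Lyapunov function and its derivative along trajectories (Lie derivative).
V = x**2 + y**2
Vdot = sp.simplify(sp.diff(V, x) * f1 + sp.diff(V, y) * f2)
print(Vdot)                                        # -2*x**4 - 2*y**4

# V is positive definite (a sum of squares, zero only at the origin) and
# Vdot is negative away from the origin, so V certifies stability.
print(sp.simplify(Vdot + 2*x**4 + 2*y**4) == 0)    # True
```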
Global Distortions from Local Rewards: Neural Coding Strategies in Path-Integrating Neural Systems
·3589 words·17 mins
AI Generated AI Theory Representation Learning 🏢 UC Santa Barbara
Reward-driven distortions in grid cell patterns are global, not local, preserving path integration while encoding environmental landmarks in spatial navigation.
Global Convergence in Training Large-Scale Transformers
·398 words·2 mins
AI Generated AI Theory Optimization 🏢 Princeton University
Large-scale Transformer training’s global convergence is proven using weight decay regularization and a refined mean-field analysis, bridging theory and practice.
GLinSAT: The General Linear Satisfiability Neural Network Layer By Accelerated Gradient Descent
·1911 words·9 mins
AI Theory Optimization 🏢 Tsinghua University
GLinSAT: A novel neural network layer efficiently solves general linear constraint satisfaction problems via accelerated gradient descent, enabling differentiable backpropagation and improved GPU perf…
Gliding over the Pareto Front with Uniform Designs
·2556 words·12 mins
AI Theory Optimization 🏢 Computer Science, City University of Hong Kong
UMOD: a novel multi-objective optimization algorithm efficiently generates uniformly distributed Pareto-optimal solutions by maximizing minimal pairwise distances, providing high-quality representatio…
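The "maximize minimal pairwise distances" criterion can be illustrated with a simple greedy farthest-point selection over candidate Pareto-front points. The numpy sketch below is only a stand-in for that criterion, not the UMOD algorithm itself, and all names are illustrative.

```python
import numpy as np

def farthest_point_subset(points, k, seed=0):
    # Greedily pick k points so that the minimal pairwise distance stays large:
    # each new point is the candidate farthest from the already chosen set.
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

# Candidate approximation of a 2-objective Pareto front (a quarter circle).
t = np.random.default_rng(1).uniform(0, np.pi / 2, size=2000)
front = np.stack([np.cos(t), np.sin(t)], axis=1)
subset = farthest_point_subset(front, k=10)
pair = np.linalg.norm(subset[:, None] - subset[None, :], axis=-1)
print(pair[pair > 0].min())   # minimal pairwise distance of the selected design
```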
GL-NeRF: Gauss-Laguerre Quadrature Enables Training-Free NeRF Acceleration
·2622 words·13 mins
AI Generated Computer Vision 3D Vision 🏢 Carnegie Mellon University
GL-NeRF accelerates NeRF rendering by using Gauss-Laguerre quadrature, drastically reducing MLP calls without needing additional networks or data structures.
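The trick GL-NeRF exploits is that a volume-rendering-style integral of the form ∫₀^∞ g(t) e^(-t) dt is exactly what Gauss-Laguerre quadrature is built for, so a handful of sample points (i.e. MLP calls) suffice. Below is a minimal numpy sketch of the quadrature rule itself with an assumed smooth integrand, not the NeRF pipeline.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# Integral in Gauss-Laguerre form: I = integral_0^inf g(t) * exp(-t) dt.
# g is a smooth stand-in for the colour term along a ray (assumed, not from the paper).
g = lambda t: np.exp(-t / 3.0)
exact = 1.0 / (1.0 + 1.0 / 3.0)            # closed form for this g: 0.75

nodes, weights = laggauss(8)               # only 8 evaluation points
estimate = np.sum(weights * g(nodes))
print(exact, estimate)                     # agree to many digits; a uniform sampler
                                           # would need vastly more evaluations
```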
GITA: Graph to Visual and Textual Integration for Vision-Language Graph Reasoning
·2396 words·12 mins
Multimodal Learning Vision-Language Models 🏢 Hong Kong University of Science and Technology
GITA, a novel framework, integrates visual graphs into language models for superior vision-language graph reasoning, outperforming existing LLMs and introducing the first vision-language dataset, GVLQ…
GFT: Graph Foundation Model with Transferable Tree Vocabulary
·4791 words·23 mins
AI Generated Machine Learning Transfer Learning 🏢 University of Notre Dame
GFT: a novel graph foundation model using transferable computation trees as tokens, improving generalization and reducing negative transfer in graph learning.
GFlowNet Assisted Biological Sequence Editing
·2128 words·10 mins
AI Applications Healthcare 🏢 UC Irvine
GFNSeqEditor, a novel biological sequence editing algorithm, efficiently enhances desired properties while minimizing edits using generative flow networks, surpassing existing methods.
Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment
·1742 words·9 mins
Natural Language Processing Large Language Models 🏢 University of Minnesota
Reward learning from human demonstrations enhances supervised fine-tuning (SFT) for better LLM alignment.
GeoNLF: Geometry guided Pose-Free Neural LiDAR Fields
·1769 words·9 mins
Computer Vision 3D Vision 🏢 Tongji University
GeoNLF: Geometry-guided Pose-free Neural LiDAR Fields advances LiDAR point cloud processing by combining neural and geometric optimization for superior novel view synthesis and multi-vi…
Geometry-aware training of factorized layers in tensor Tucker format
·1715 words·9 mins
Machine Learning Deep Learning 🏢 Gran Sasso Science Institute
Train factorized neural network layers efficiently with Geometry-aware training in Tucker format (TDLRT)!
Geometry of naturalistic object representations in recurrent neural network models of working memory
·4025 words·19 mins
AI Generated Machine Learning Deep Learning 🏢 IBM Research
RNNs represent naturalistic objects in working memory using chronological subspaces, defying traditional slot models; object features are less orthogonalized in RNNs than in perceptual space.
Geometry Cloak: Preventing TGS-based 3D Reconstruction from Copyrighted Images
·4369 words·21 mins
AI Generated Computer Vision 3D Vision 🏢 Hong Kong Baptist University
Geometry Cloak embeds invisible perturbations in images to thwart AI-based 3D reconstruction, forcing the AI to generate identifiable patterns that act as watermarks to assert copyright.
Geometry Awakening: Cross-Geometry Learning Exhibits Superiority over Individual Structures
·2651 words·13 mins
Machine Learning Deep Learning 🏢 School of Artificial Intelligence, Jilin University
Cross-geometry learning using knowledge distillation significantly improves GNN performance by leveraging both Euclidean and hyperbolic geometric properties of graph data.
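For context on the hyperbolic half of "cross-geometry": hyperbolic GNNs typically embed nodes in a Poincaré ball, whose geodesic distance grows rapidly near the boundary and therefore suits hierarchical graphs. A minimal numpy sketch of that distance (illustrative only, not the paper's distillation pipeline):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-12):
    # Geodesic distance between two points strictly inside the unit Poincaré ball.
    diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * diff / (denom + eps))

u = np.array([0.10, 0.20])
v = np.array([0.70, 0.60])                   # closer to the ball's boundary
print(np.linalg.norm(u - v))                 # Euclidean distance ≈ 0.72
print(poincare_distance(u, v))               # hyperbolic distance ≈ 2.8, much larger
```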
Geometric-Averaged Preference Optimization for Soft Preference Labels
·2987 words·15 mins
Natural Language Processing Large Language Models 🏢 University of Tokyo
This paper improves LLM alignment by introducing soft preference labels and geometric averaging into Direct Preference Optimization, with consistent gains on standard benchmarks.
Geometric Trajectory Diffusion Models
·3387 words·16 mins
AI Generated Machine Learning Deep Learning 🏢 Stanford University
GeoTDM: First diffusion model generating realistic 3D geometric trajectories, capturing complex spatial interactions and temporal correspondence, significantly improving generation quality.