Representation Learning

DropEdge not Foolproof: Effective Augmentation Method for Signed Graph Neural Networks
·2471 words·12 mins
AI Theory Representation Learning 🏢 Huazhong Agricultural University
SGA: A novel framework boosts Signed Graph Neural Network performance by addressing graph sparsity and unbalanced triangles, achieving up to 26.2% F1-micro improvement.
Divide-and-Conquer Predictive Coding: a structured Bayesian inference algorithm
·1683 words·8 mins
AI Theory Representation Learning 🏢 Department of Psychology, Vanderbilt University
Divide-and-conquer predictive coding (DCPC) revolutionizes structured Bayesian inference by achieving superior performance in high-dimensional problems while remaining biologically plausible.
Disentangling Interpretable Factors with Supervised Independent Subspace Principal Component Analysis
·3550 words·17 mins
AI Generated Machine Learning Representation Learning 🏢 Columbia University
Supervised Independent Subspace PCA (sisPCA) disentangles interpretable factors in high-dimensional data by leveraging supervision to maximize subspace dependence on target variables while minimizing …
Discrete Dictionary-based Decomposition Layer for Structured Representation Learning
·4466 words·21 mins
AI Generated Machine Learning Representation Learning 🏢 Kyungpook National University
Boosting structured representation learning, a novel Discrete Dictionary-based Decomposition (D3) layer significantly improves systematic generalization in TPR-based models by efficiently decomposing …
Diffusion Model with Cross Attention as an Inductive Bias for Disentanglement
·3205 words·16 mins
Representation Learning 🏢 Microsoft Research
Diffusion models with cross-attention: a powerful inductive bias for effortless disentanglement!
DECRL: A Deep Evolutionary Clustering Jointed Temporal Knowledge Graph Representation Learning Approach
·2468 words·12 mins
AI Generated Machine Learning Representation Learning 🏢 Zhejiang University
DECRL: A novel deep learning approach for temporal knowledge graph representation learning, capturing high-order correlation evolution and outperforming existing methods.
Decoupling Semantic Similarity from Spatial Alignment for Neural Networks.
·2318 words·11 mins
Computer Vision Representation Learning 🏢 Google DeepMind
Researchers developed semantic RSMs, a novel approach to measure semantic similarity in neural networks, improving image retrieval and aligning network representations with predicted class probabilities.
Community Detection Guarantees using Embeddings Learned by Node2Vec
·2609 words·13 mins
AI Generated AI Theory Representation Learning 🏢 Columbia University
Node2Vec, a popular network embedding method, is proven to consistently recover community structure in stochastic block models, paving the way for more reliable unsupervised community detection.
Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations
·2470 words·12 mins
AI Theory Representation Learning 🏢 Hebrew University of Jerusalem
Zero-shot learning models often fail in real-world scenarios due to unseen class distribution shifts. This work introduces a novel algorithm that learns robust representations by creating synthetic data…
Causal Temporal Representation Learning with Nonstationary Sparse Transition
·2158 words·11 mins
AI Theory Representation Learning 🏢 Carnegie Mellon University
CtrlNS: A novel framework for causal temporal representation learning tackles the challenge of nonstationary time series by leveraging sparse transition assumptions, achieving improved accuracy in identifying…
Can Transformers Smell Like Humans?
·2615 words·13 mins
AI Theory Representation Learning 🏢 KTH Royal Institute of Technology
Pre-trained transformer models can predict human smell perception by encoding odorant chemical structures, aligning with expert labels, continuous ratings, and similarity assessments.
Bisimulation Metrics are Optimal Transport Distances, and Can be Computed Efficiently
·1675 words·8 mins
AI Theory Representation Learning 🏢 Universitat Pompeu Fabra
Bisimulation metrics and optimal transport distances are equivalent and can be computed efficiently using a novel Sinkhorn Value Iteration algorithm.
Binding in hippocampal-entorhinal circuits enables compositionality in cognitive maps
·2222 words·11 mins
AI Theory Representation Learning 🏢 UC Berkeley
A novel model reveals how hippocampal-entorhinal circuits use compositional coding and modular attractor networks to enable robust and flexible spatial representation, advancing our understanding of cognitive maps.
Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks?
·2234 words·11 mins
AI Theory Representation Learning 🏢 Gaoling School of Artificial Intelligence, Renmin University of China
High-degree representations significantly boost the expressiveness of E(3)-equivariant GNNs, overcoming limitations of lower-degree models on symmetric structures, as demonstrated theoretically and empirically.
Approximating mutual information of high-dimensional variables using learned representations
·2528 words·12 mins
AI Theory Representation Learning 🏢 Harvard University
Latent Mutual Information (LMI) approximation accurately estimates mutual information in high-dimensional data using low-dimensional learned representations, solving a critical problem in various scientific fields.
An In-depth Investigation of Sparse Rate Reduction in Transformer-like Models
·2521 words·12 mins
AI Theory Representation Learning 🏢 School of Computing and Data Science, University of Hong Kong
Sparse Rate Reduction (SRR) improves the interpretability of transformer-like models, correlates with better generalization, and offers principled guidance for model design.
A Walsh Hadamard Derived Linear Vector Symbolic Architecture
·1922 words·10 mins
AI Theory Representation Learning 🏢 University of Maryland, Baltimore County
Hadamard-derived Linear Binding (HLB): A novel, efficient vector symbolic architecture surpassing existing methods in classical AI tasks and deep learning applications.