
Representation Learning

Metric Space Magnitude for Evaluating the Diversity of Latent Representations
·6876 words·33 mins
AI Generated AI Theory Representation Learning 🏢 University of Edinburgh
Novel metric space magnitude measures rigorously quantify the diversity of latent representations across multiple scales, showing superior performance in detecting mode collapse and characterizing embeddings.
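As background for this entry, below is a minimal NumPy sketch of the standard magnitude computation for a finite metric space (Leinster's definition: the sum of the entries of the inverse similarity matrix), not the paper's specific diversity measures; the `magnitude` helper and the toy latent codes are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def magnitude(points, scale=1.0):
    """Magnitude of a finite metric space at scale t.

    Standard definition: with similarity matrix Z_ij = exp(-t * d(x_i, x_j)),
    the magnitude is the sum of the weighting vector w solving Z w = 1,
    i.e. the sum of the entries of Z^{-1}.
    """
    D = squareform(pdist(points))                 # pairwise Euclidean distances
    Z = np.exp(-scale * D)                        # similarity matrix at scale t
    w = np.linalg.solve(Z, np.ones(len(points)))  # weighting vector
    return w.sum()

# Sweeping the scale traces the magnitude function, whose profile reflects
# the effective number of distinct points -- a multi-scale diversity signal.
X = np.random.default_rng(0).normal(size=(100, 8))  # toy latent codes
print([round(magnitude(X, t), 2) for t in (0.1, 1.0, 10.0)])
```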
Measuring Dejavu Memorization Efficiently
·2794 words·14 mins
Computer Vision Representation Learning 🏢 FAIR at Meta
New method efficiently measures how much AI models memorize from their training data, revealing that open-source models memorize less than expected.
MatrixNet: Learning over symmetry groups using learned group representations
·1841 words·9 mins
AI Theory Representation Learning 🏢 Northeastern University
MatrixNet learns efficient group representations for improved deep learning on symmetry groups, achieving higher sample efficiency and generalization than existing methods.
Marrying Causal Representation Learning with Dynamical Systems for Science
·3100 words·15 mins
AI Generated AI Theory Representation Learning 🏢 Institute of Science and Technology Austria
This study marries causal representation learning with dynamical systems to enable parameter identification in real-world scientific data, unlocking downstream causal analysis for various applications.
Long-range Meta-path Search on Large-scale Heterogeneous Graphs
·2383 words·12 mins
Machine Learning Representation Learning 🏢 Huazhong University of Science and Technology
LMSPS, a novel framework, efficiently leverages long-range dependencies in large heterogeneous graphs by dynamically identifying effective meta-paths, mitigating computational costs and over-smoothing.
Logical characterizations of recurrent graph neural networks with reals and floats
·334 words·2 mins
AI Theory Representation Learning 🏢 Tampere University
Recurrent graph neural networks (GNNs) with real and floating-point numbers are precisely characterized by rule-based and infinitary modal logics, respectively, enabling a deeper understanding of their expressive power.
Leveraging Contrastive Learning for Enhanced Node Representations in Tokenized Graph Transformers
·1604 words·8 mins
Machine Learning Representation Learning 🏢 Huazhong University of Science and Technology
GCFormer, a novel graph Transformer, enhances node representation learning by employing a hybrid token generator and contrastive learning, outperforming existing methods on various datasets.
Learning to Shape In-distribution Feature Space for Out-of-distribution Detection
·1662 words·8 mins
Machine Learning Representation Learning 🏢 Hong Kong Baptist University
Deterministically shaping the in-distribution feature space removes the need for distributional assumptions in OOD detection, leading to superior performance.
Learning Structured Representations with Hyperbolic Embeddings
·3560 words·17 mins
Computer Vision Representation Learning 🏢 University of Illinois, Urbana-Champaign
HypStructure boosts representation learning by embedding label hierarchies into hyperbolic space, improving accuracy and interpretability.
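For readers new to this entry's setting, here is the standard geodesic distance on the Poincaré ball, the usual model for hyperbolic embeddings of hierarchies; this is textbook background, not HypStructure's own objective, and the example points are illustrative.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Standard Poincare-ball geodesic distance:
    d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2))).
    Distances blow up near the boundary, so deep tree levels fit near the
    rim while the root sits near the origin -- why hyperbolic space suits
    label hierarchies.
    """
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / max(denom, eps))

root = np.array([0.0, 0.0])   # near the origin: top of a hierarchy
leaf = np.array([0.9, 0.0])   # near the boundary: deep in the tree
print(poincare_distance(root, leaf))
```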
Learning Structure-Aware Representations of Dependent Types
·1855 words·9 mins
AI Theory Representation Learning 🏢 Aalto University
This research pioneers the integration of machine learning with the dependently-typed programming language Agda, introducing a novel dataset and neural architecture for faithful program representations.
Learning Representations for Hierarchies with Minimal Support
·2038 words·10 mins
AI Theory Representation Learning 🏢 University of Massachusetts Amherst
Learn graph representations efficiently by identifying the minimal data needed to uniquely define a graph’s structure, achieving robust performance with fewer resources.
Learning Place Cell Representations and Context-Dependent Remapping
·2971 words·14 mins
AI Generated AI Theory Representation Learning 🏢 Simula Research Laboratory
Neural networks learn place cell-like representations and context-dependent remapping using a novel similarity-based objective function, providing insights into hippocampal encoding.
Learning Partitions from Context
·440 words·3 mins
AI Generated AI Theory Representation Learning 🏢 Max Planck Institute for Intelligent Systems
Learning hidden structures from sparse interactions in data is computationally hard but achievable with sufficient samples using gradient-based methods, as shown by analyzing the gradient dynamics.
Learning Linear Causal Representations from General Environments: Identifiability and Intrinsic Ambiguity
·1476 words·7 mins
AI Theory Representation Learning 🏢 Stanford University
LiNGCREL, a novel algorithm, provably recovers linear causal representations from diverse environments, achieving identifiability despite intrinsic ambiguities, thus advancing causal AI.
Learning Identifiable Factorized Causal Representations of Cellular Responses
·2593 words·13 mins
AI Theory Representation Learning 🏢 Genentech
FCR, a novel method, reveals causal structure in single-cell perturbation data by learning disentangled cellular representations specific to covariates, treatments, and their interactions, outperforming existing methods.
Learning Human-like Representations to Enable Learning Human Values
·2442 words·12 mins
AI Theory Representation Learning 🏢 Princeton University
Aligning AI’s world representation with humans enables faster, safer learning of human values, improving both exploration and generalization.
Learning diverse causally emergent representations from time series data
·2251 words·11 mins
AI Theory Representation Learning 🏢 Department of Computing, Imperial College London
A novel differentiable architecture learns emergent system features from time-series data by maximizing causal emergence, outperforming pure mutual-information maximization.
Learning Complete Protein Representation by Dynamically Coupling of Sequence and Structure
·2792 words·14 mins
AI Generated Natural Language Processing Representation Learning 🏢 Zhejiang University
CoupleNet dynamically links protein sequences and structures for improved representations, surpassing state-of-the-art methods in function prediction, particularly for uncommon proteins.
Learning Better Representations From Less Data For Propositional Satisfiability
·2124 words·10 mins
AI Theory Representation Learning 🏢 CISPA Helmholtz Center for Information Security
NeuRes, a novel neuro-symbolic approach, achieves superior SAT solving accuracy using significantly less training data than existing methods by combining certificate-driven learning with expert iteration.
Latent Functional Maps: a spectral framework for representation alignment
·2758 words·13 mins
AI Generated Machine Learning Representation Learning 🏢 IST Austria
Latent Functional Maps (LFM) offers a novel spectral framework for comparing, aligning, and transferring neural network representations, boosting downstream task performance and interpretability.