
Machine Learning

Neuronal Competition Groups with Supervised STDP for Spike-Based Classification
·1778 words·9 mins·
AI Generated Machine Learning Deep Learning 🏢 Univ. Lille
Neuronal Competition Groups (NCGs) enhance supervised STDP training in spiking neural networks by promoting balanced competition and improved class separation, resulting in significantly higher classi…
NeuralSolver: Learning Algorithms For Consistent and Efficient Extrapolation Across General Tasks
·4139 words·20 mins·
Machine Learning Reinforcement Learning 🏢 INESC-ID
NeuralSolver: A novel recurrent solver efficiently and consistently extrapolates algorithms from smaller problems to larger ones, handling various problem sizes.
NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes
·4651 words·22 mins·
Machine Learning Deep Learning 🏢 IBM Research
NeuralFuse: A novel add-on module learns input transformations to maintain accuracy in low-voltage DNN inference, achieving up to 57% accuracy recovery and 24% energy savings without retraining.
Neural P$^3$M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs
·2015 words·10 mins·
Machine Learning Deep Learning 🏢 Xi'an Jiaotong University
Neural P³M enhances geometric GNNs by incorporating mesh points to model long-range interactions in molecules, achieving state-of-the-art accuracy in predicting energy and forces.
Neural Flow Diffusion Models: Learnable Forward Process for Improved Diffusion Modelling
·1763 words·9 mins·
Machine Learning Deep Learning 🏢 University of Amsterdam
Neural Flow Diffusion Models (NFDM) revolutionize generative modeling by introducing a learnable forward process, resulting in state-of-the-art likelihoods and versatile generative dynamics.
Neural Embeddings Rank: Aligning 3D latent dynamics with movements
·2698 words·13 mins·
Machine Learning Deep Learning 🏢 Johns Hopkins University
Neural Embeddings Rank (NER) aligns 3D latent neural dynamics with movements, enabling cross-session decoding and revealing consistent neural dynamics across brain areas.
Neural decoding from stereotactic EEG: accounting for electrode variability across subjects
·1818 words·9 mins·
Machine Learning Transfer Learning 🏢 Stanford University
Scalable SEEG decoding model, seegnificant, leverages transformers to decode behavior across subjects despite electrode variability, achieving high accuracy and transfer learning capability.
Neural Conditional Probability for Uncertainty Quantification
·2341 words·11 mins·
Machine Learning Deep Learning 🏢 CSML, Istituto Italiano Di Tecnologia
Neural Conditional Probability (NCP) offers a new operator-theoretic approach for efficiently learning conditional distributions, enabling streamlined inference and providing theoretical guarantees fo…
Neural Collapse To Multiple Centers For Imbalanced Data
·2279 words·11 mins·
Machine Learning Deep Learning 🏢 Shanxi University
Researchers enhance imbalanced data classification by inducing Neural Collapse to Multiple Centers (NCMC) using a novel cosine loss function, achieving performance comparable to state-of-the-art metho…
Neural Collapse Inspired Feature Alignment for Out-of-Distribution Generalization
·1839 words·9 mins·
Machine Learning Deep Learning 🏢 Tsinghua University
Neural Collapse-inspired Feature Alignment (NCFAL) significantly boosts out-of-distribution generalization by aligning semantic features to a simplex ETF, even without environment labels.
Neural Characteristic Activation Analysis and Geometric Parameterization for ReLU Networks
·2633 words·13 mins·
AI Generated Machine Learning Deep Learning 🏢 University of Cambridge
Researchers introduce Geometric Parameterization (GmP), a novel neural network parameterization resolving instability in ReLU network training, leading to faster convergence and better generalization.
Neuc-MDS: Non-Euclidean Multidimensional Scaling Through Bilinear Forms
·2034 words·10 mins·
AI Generated Machine Learning Dimensionality Reduction 🏢 Rutgers University
Neuc-MDS: Revolutionizing multidimensional scaling by using bilinear forms for non-Euclidean data, minimizing errors, and resolving the dimensionality paradox!
Near-Optimality of Contrastive Divergence Algorithms
·280 words·2 mins·
Machine Learning Unsupervised Learning 🏢 Gatsby Computational Neuroscience Unit, University College London
Contrastive Divergence algorithms achieve near-optimal parameter estimation rates, matching the Cramér-Rao lower bound under specific conditions, as proven by a novel non-asymptotic analysis.
Near-Optimal Dynamic Regret for Adversarial Linear Mixture MDPs
·308 words·2 mins·
Machine Learning Reinforcement Learning 🏢 National Key Laboratory for Novel Software Technology, Nanjing University, China
Near-optimal dynamic regret is achieved for adversarial linear mixture MDPs with unknown transitions, bridging occupancy-measure and policy-based methods for superior performance.
Near-Optimal Distributionally Robust Reinforcement Learning with General $L_p$ Norms
·556 words·3 mins·
AI Generated Machine Learning Reinforcement Learning 🏢 Ecole Polytechnique
This paper presents near-optimal sample complexity bounds for solving distributionally robust reinforcement learning problems with general Lp norms, showing robust RL can be more sample-efficient than…
Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity
·1858 words·9 mins·
AI Generated Machine Learning Optimization 🏢 School of Data Science, Fudan University
SVOGS: Near-optimal distributed minimax optimization is achieved under second-order similarity, balancing communication, computation, and achieving near-optimal complexities.
Near-Minimax-Optimal Distributional Reinforcement Learning with a Generative Model
·1906 words·9 mins·
Machine Learning Reinforcement Learning 🏢 Google DeepMind
New distributional RL algorithm (DCFP) achieves near-minimax optimality for return distribution estimation in the generative model regime.
Navigating the Effect of Parametrization for Dimensionality Reduction
·3077 words·15 mins·
Machine Learning Dimensionality Reduction 🏢 Duke University
ParamRepulsor, a novel parametric dimensionality reduction method, achieves state-of-the-art local structure preservation by mining hard negatives and using a tailored loss function.
Navigating Chemical Space with Latent Flows
·2900 words·14 mins·
Machine Learning Deep Learning 🏢 Cornell University
ChemFlow: a new framework efficiently explores chemical space using latent flows, unifying existing methods and incorporating physical priors for molecule manipulation and optimization.
N-agent Ad Hoc Teamwork
·3605 words·17 mins·
AI Generated Machine Learning Reinforcement Learning 🏢 University of Texas at Austin
The new POAM algorithm excels at multi-agent cooperation by adapting to diverse and changing teammates in dynamic scenarios.