
🏢 Rutgers University

Slot State Space Models
·2613 words·13 mins
Computer Vision Video Understanding 🏢 Rutgers University
SlotSSMs: a novel framework for modular sequence modeling, achieving significant performance gains by incorporating independent mechanisms and sparse interactions into State Space Models.
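A minimal sketch of the idea described above, not the authors' implementation: the hidden state is split into K slots, each evolving under its own linear state-space-style transition, with a single cross-slot attention step standing in for the paper's sparse interactions. Module names, sizes, and the attention-based mixing are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SlotSSMBlock(nn.Module):
    def __init__(self, num_slots=4, slot_dim=32):
        super().__init__()
        # Independent (per-slot) SSM-style transition parameters.
        self.A = nn.Parameter(torch.randn(num_slots, slot_dim, slot_dim) * 0.01)
        self.B = nn.Parameter(torch.randn(num_slots, slot_dim, slot_dim) * 0.01)
        # Cross-slot interaction (the paper uses sparse interactions; plain attention here for brevity).
        self.attn = nn.MultiheadAttention(slot_dim, num_heads=1, batch_first=True)

    def forward(self, slots, x):
        # slots: (batch, K, D) previous slot states; x: (batch, K, D) per-slot inputs.
        # 1) Independent transitions: h_k <- A_k h_k + B_k x_k for each slot k.
        h = (torch.einsum('kde,bke->bkd', self.A, slots)
             + torch.einsum('kde,bke->bkd', self.B, x))
        # 2) Slots exchange information through one attention step over the K slots.
        mix, _ = self.attn(h, h, h)
        return h + mix

block = SlotSSMBlock()
slots = torch.zeros(2, 4, 32)
for t in range(5):                       # unroll over a short sequence
    slots = block(slots, torch.randn(2, 4, 32))
print(slots.shape)                       # torch.Size([2, 4, 32])
```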
Neuc-MDS: Non-Euclidean Multidimensional Scaling Through Bilinear Forms
·2034 words·10 mins
AI Generated Machine Learning Dimensionality Reduction 🏢 Rutgers University
Neuc-MDS generalizes multidimensional scaling to non-Euclidean data through bilinear forms, reducing embedding error and resolving the dimensionality paradox.
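A rough numpy sketch of the bilinear-form idea as described in the summary: unlike classical MDS, which keeps only the largest positive eigenvalues of the centered Gram matrix, the embedding may also use negative eigenvalues, paired with a signed bilinear form. The eigenvalue-selection rule and function names below are assumptions for illustration (the paper optimizes the selection rather than simply taking largest magnitudes).

```python
import numpy as np

def neuc_mds(D, dim):
    """D: (n, n) symmetric dissimilarity matrix; returns coordinates X and signature s."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    G = -0.5 * J @ (D ** 2) @ J                  # Gram-like matrix (may be indefinite)
    w, V = np.linalg.eigh(G)
    idx = np.argsort(-np.abs(w))[:dim]           # largest-magnitude eigenvalues (assumption)
    w, V = w[idx], V[:, idx]
    X = V * np.sqrt(np.abs(w))                   # embedding coordinates
    s = np.sign(w)                               # signature of the bilinear form
    return X, s

def bilinear_dissimilarity(X, s):
    # Squared "distances" under the signed bilinear form <x, y>_s = sum_i s_i x_i y_i.
    sq = (X ** 2 * s).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * (X * s) @ X.T

D = np.abs(np.random.randn(6, 6)); D = (D + D.T) / 2; np.fill_diagonal(D, 0)
X, s = neuc_mds(D, dim=3)
print(X.shape, s)
```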
Learning World Models for Unconstrained Goal Navigation
·2782 words·14 mins
Machine Learning Reinforcement Learning 🏢 Rutgers University
MUN: a novel goal-directed exploration algorithm that significantly improves world model reliability and policy generalization in sparse-reward goal-conditioned RL, enabling efficient navigation across div…
Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate
·2208 words·11 mins
Machine Learning Deep Learning 🏢 Rutgers University
Boost deep learning generalization with Learning from Teaching (LOT)! LOT trains auxiliary ‘student’ models to imitate a primary ‘teacher’ model, improving the teacher’s ability to capture generalizab…
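A hedged sketch of the teacher-student regularization described above: a student network is trained to imitate the teacher's predictions, and the teacher's own loss includes a term rewarding it for being easy to imitate. The loss weighting, architectures, and alternating update schedule are my assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher, student = nn.Linear(10, 3), nn.Linear(10, 3)
opt_t = torch.optim.SGD(teacher.parameters(), lr=0.1)
opt_s = torch.optim.SGD(student.parameters(), lr=0.1)
lam = 0.5                                   # weight of the imitation regularizer (assumed)

x, y = torch.randn(32, 10), torch.randint(0, 3, (32,))
for step in range(100):
    # Student step: imitate the teacher's current predictions.
    imit = F.kl_div(F.log_softmax(student(x), -1),
                    F.softmax(teacher(x).detach(), -1), reduction='batchmean')
    opt_s.zero_grad(); imit.backward(); opt_s.step()

    # Teacher step: task loss plus the "easy to imitate" term,
    # this time backpropagated through the teacher's outputs.
    imit_t = F.kl_div(F.log_softmax(student(x).detach(), -1),
                      F.softmax(teacher(x), -1), reduction='batchmean')
    loss = F.cross_entropy(teacher(x), y) + lam * imit_t
    opt_t.zero_grad(); loss.backward(); opt_t.step()
```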
Exploring the Edges of Latent State Clusters for Goal-Conditioned Reinforcement Learning
·3530 words·17 mins
AI Generated Machine Learning Reinforcement Learning 🏢 Rutgers University
CE2: A new goal-directed exploration algorithm for efficient reinforcement learning in unknown environments, prioritizing accessible frontier goals via latent state clustering.
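An illustrative sketch of the cluster-edge goal selection in the CE2 summary: cluster the latent embeddings of visited states, then propose as exploration goals the visited states near the boundary of their own cluster, since these frontier states are reachable yet close to unexplored regions. The distance-to-center edge criterion and function names are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def edge_goals(latents, n_clusters=5, n_goals=3):
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(latents)
    centers = km.cluster_centers_[km.labels_]          # each state's own cluster center
    dist_to_center = np.linalg.norm(latents - centers, axis=1)
    return np.argsort(-dist_to_center)[:n_goals]       # indices of "edge" states

latents = np.random.randn(200, 16)                     # latent states from a world model
print(edge_goals(latents))                             # indices of 3 frontier goal states
```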
Estimating Generalization Performance Along the Trajectory of Proximal SGD in Robust Regression
·1899 words·9 mins
AI Theory Optimization 🏢 Rutgers University
New consistent estimators precisely track generalization error along robust regression’s iterative training trajectory, enabling selection of the stopping iteration that minimizes that error.
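A minimal sketch of the setting in this entry: proximal SGD on a robust (Huber-loss) regression with an L1 penalty, tracking an error curve along the trajectory to pick a stopping iteration. The paper builds consistent estimators of generalization error from the training data themselves; as a stand-in, this sketch simply tracks error on a held-out split, which is an assumption and not their estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = rng.normal(size=50) * (rng.random(50) < 0.2)     # sparse ground truth
y = X @ w_true + rng.standard_t(df=2, size=200)           # heavy-tailed noise
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

def huber_grad(r, delta=1.0):                              # d(Huber)/d(residual)
    return np.clip(r, -delta, delta)

def prox_l1(w, t):                                         # proximal map of t * ||w||_1
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w, lr, lam, errs = np.zeros(50), 0.01, 0.1, []
for it in range(300):
    i = rng.integers(0, 150, size=20)                      # minibatch
    g = -Xtr[i].T @ huber_grad(ytr[i] - Xtr[i] @ w) / len(i)
    w = prox_l1(w - lr * g, lr * lam)                      # proximal SGD step
    errs.append(np.mean((yva - Xva @ w) ** 2))             # tracked generalization proxy
print('best stopping iteration:', int(np.argmin(errs)))
```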
BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models
·3160 words·15 mins
AI Generated Natural Language Processing Large Language Models 🏢 Rutgers University
BLoB: Bayesian Low-Rank Adaptation by Backpropagation enhances LLMs by jointly tuning mean and covariance of parameters during fine-tuning, improving uncertainty estimation and generalization.
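A rough sketch of the idea in the BLoB summary: a LoRA-style low-rank update where one factor is a Gaussian with learned mean and (diagonal) variance, trained jointly by backpropagation via the reparameterization trick plus a KL term to a standard-normal prior. Which factor is stochastic, the prior, and the KL weight are assumptions here, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class BayesianLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=8):
        super().__init__()
        self.base = base.requires_grad_(False)              # frozen pretrained layer
        d_out, d_in = base.weight.shape
        self.A_mu = nn.Parameter(torch.zeros(rank, d_in))   # mean of low-rank factor A
        self.A_logvar = nn.Parameter(torch.full((rank, d_in), -6.0))  # log-variance of A
        self.B = nn.Parameter(torch.zeros(d_out, rank))     # deterministic factor B

    def forward(self, x):
        # Reparameterized sample of A, so mean and variance both receive gradients.
        A = self.A_mu + torch.randn_like(self.A_mu) * (0.5 * self.A_logvar).exp()
        return self.base(x) + x @ A.t() @ self.B.t()        # W x + B A x

    def kl(self):                                           # KL(q(A) || N(0, I))
        return 0.5 * (self.A_logvar.exp() + self.A_mu ** 2 - 1 - self.A_logvar).sum()

layer = BayesianLoRALinear(nn.Linear(16, 16))
x = torch.randn(4, 16)
loss = layer(x).pow(2).mean() + 1e-3 * layer.kl()           # task loss + weighted KL
loss.backward()
```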