🏢 Harvard University

User-Creator Feature Polarization in Recommender Systems with Dual Influence
·2172 words·11 mins
AI Theory Optimization 🏢 Harvard University
Recommender systems, when influenced by both users and creators, inevitably polarize; however, prioritizing efficiency through methods like top-k truncation can surprisingly enhance diversity.
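The top-k truncation mentioned in this teaser is the standard operation of keeping only each user's k highest-scoring items. A minimal NumPy sketch of that operation follows; the function name `top_k_truncate` and the toy scores are illustrative, not taken from the paper.

```python
import numpy as np

def top_k_truncate(scores: np.ndarray, k: int) -> np.ndarray:
    """Zero out all but the k highest-scoring items per user (row).

    scores: (n_users, n_items) matrix of user-item relevance scores.
    Returns a matrix where each row keeps only its top-k entries.
    """
    truncated = np.zeros_like(scores)
    # argpartition finds the indices of the k largest entries per row
    top_idx = np.argpartition(scores, -k, axis=1)[:, -k:]
    rows = np.arange(scores.shape[0])[:, None]
    truncated[rows, top_idx] = scores[rows, top_idx]
    return truncated

# toy usage: 3 users, 5 items, keep top-2 recommendations per user
rng = np.random.default_rng(0)
print(top_k_truncate(rng.random((3, 5)), k=2))
```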
Unrolled denoising networks provably learn to perform optimal Bayesian inference
·2411 words·12 mins
AI Generated AI Theory Optimization 🏢 Harvard University
Unrolled neural networks, trained via gradient descent, provably achieve optimal Bayesian inference for compressed sensing, surpassing prior-aware counterparts.
UniTS: A Unified Multi-Task Time Series Model
·4241 words·20 mins
Machine Learning Deep Learning 🏢 Harvard University
UniTS: one model to rule them all! This unified multi-task time series model excels in forecasting, classification, anomaly detection, and imputation, outperforming specialized models across 38 diverse datasets.
Unitary Convolutions for Learning on Graphs and Groups
·2134 words·11 mins
🏢 Harvard University
Stable deep learning on graphs achieved using novel unitary group convolutions, preventing over-smoothing and enhancing model robustness.
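As a rough illustration of why unitarity helps: exponentiating a Hermitian propagation operator with an imaginary step yields a norm-preserving feature update, so repeated layers cannot collapse node representations. The sketch below is a generic construction assuming an undirected graph with a symmetric adjacency matrix, not the authors' exact layer.

```python
import numpy as np
from scipy.linalg import expm

def unitary_graph_propagation(adj: np.ndarray, x: np.ndarray, t: float = 1.0) -> np.ndarray:
    """Propagate node features with the unitary operator exp(i * t * A).

    For an undirected graph the adjacency matrix A is real symmetric
    (hence Hermitian), so exp(i * t * A) is unitary and preserves the
    norm of the node-feature matrix, which is the property that counters
    over-smoothing.  `t` plays the role of a step size.
    """
    U = expm(1j * t * adj)           # unitary because adj is Hermitian
    return U @ x.astype(complex)     # norm-preserving feature update

# toy usage: a 4-node cycle graph with 2-dimensional node features
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
x = np.random.default_rng(0).normal(size=(4, 2))
y = unitary_graph_propagation(adj, x)
print(np.linalg.norm(x), np.linalg.norm(y))  # Frobenius norms match
```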
Trading off Consistency and Dimensionality of Convex Surrogates for Multiclass Classification
·1544 words·8 mins
AI Theory Optimization 🏢 Harvard University
Researchers balance accuracy and efficiency in multiclass classification by trading off the consistency of convex surrogate losses against their dimensionality, introducing partially consistent surrogate losses.
The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains
·2128 words·10 mins
Natural Language Processing Large Language Models 🏢 Harvard University
Transformers learn to perform in-context learning of Markov chains hierarchically, progressing from simpler unigram strategies to more complex bigram solutions, with the presence of simpler solutions delaying the emergence of the more complex ones.
Testing Calibration in Nearly-Linear Time
·1823 words·9 mins
AI Generated AI Theory Interpretability 🏢 Harvard University
This paper presents nearly-linear time algorithms for testing model calibration, improving upon existing methods and providing theoretical lower bounds for various calibration measures.
Stabilizing Linear Passive-Aggressive Online Learning with Weighted Reservoir Sampling
·7304 words·35 mins
AI Generated AI Applications Security 🏢 Harvard University
Weighted reservoir sampling stabilizes online learning algorithms by creating a robust ensemble of intermediate solutions, significantly improving accuracy and mitigating sensitivity to outliers.
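The sampling primitive named in the title is the classic Efraimidis–Spirakis weighted reservoir sampler; a minimal sketch follows. Maintaining a reservoir of intermediate model checkpoints, as the paper does, would sit on top of this primitive; the stream of (item, weight) pairs below is purely illustrative.

```python
import heapq
import random

def weighted_reservoir_sample(stream, k, rng=random.Random(0)):
    """One-pass weighted reservoir sampling (Efraimidis–Spirakis A-Res).

    `stream` yields (item, weight) pairs; returns k items sampled without
    replacement with probability proportional to their weights, using a
    single pass and O(k) memory.
    """
    heap = []  # min-heap of (key, item); keeps the k largest keys seen so far
    for item, weight in stream:
        key = rng.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

# toy usage: sample 3 "intermediate solutions", weighted by a validation score
stream = [(f"model_{i}", score) for i, score in enumerate([0.2, 1.5, 0.9, 3.0, 0.1, 2.2])]
print(weighted_reservoir_sample(stream, k=3))
```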
SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization
·2449 words·12 mins
Multimodal Learning Vision-Language Models 🏢 Harvard University
SocialGPT cleverly leverages Vision Foundation Models and Large Language Models for zero-shot social relation reasoning, achieving competitive results and offering interpretable outputs via prompt optimization.
SkipPredict: When to Invest in Predictions for Scheduling
·2285 words·11 mins
AI Theory Optimization 🏢 Harvard University
SkipPredict optimizes scheduling by prioritizing cheap predictions and using expensive ones only when necessary, achieving cost-effective performance.
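A heavily simplified, hypothetical reading of the "cheap first, expensive only when needed" idea: label jobs with a one-bit predictor, and pay for precise size estimates only for jobs flagged as long. The function and predictor names below are illustrative, not the paper's algorithm.

```python
def skip_predict_order(jobs, cheap_predict, expensive_predict,
                       cheap_cost=0.01, expensive_cost=1.0):
    """Hypothetical two-stage policy in the spirit of SkipPredict.

    Stage 1 pays `cheap_cost` per job for a one-bit "short vs. long" label
    and serves predicted-short jobs first, in arrival order.  Stage 2 pays
    `expensive_cost` only for the predicted-long jobs to estimate their
    sizes and serves them shortest-predicted-first.  Returns the service
    order together with the total prediction cost paid.
    """
    total_cost = 0.0
    short, long_ = [], []
    for job in jobs:
        total_cost += cheap_cost
        (short if cheap_predict(job) else long_).append(job)

    sized = []
    for job in long_:
        total_cost += expensive_cost
        sized.append((expensive_predict(job), job))
    sized.sort(key=lambda pair: pair[0])

    return short + [job for _, job in sized], total_cost

# toy usage with hypothetical predictors on (name, true_size) tuples
jobs = [("a", 0.3), ("b", 9.0), ("c", 0.5), ("d", 4.0)]
order, cost = skip_predict_order(
    jobs,
    cheap_predict=lambda j: j[1] < 1.0,   # pretend one-bit oracle
    expensive_predict=lambda j: j[1],     # pretend exact size oracle
)
print([name for name, _ in order], cost)
```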
Pruning neural network models for gene regulatory dynamics using data and domain knowledge
·3492 words·17 mins
AI Generated Machine Learning Deep Learning 🏢 Harvard University
DASH, a novel pruning framework, leverages domain knowledge to improve the interpretability and sparsity of neural network models for gene regulatory dynamics, outperforming existing methods.
Partial observation can induce mechanistic mismatches in data-constrained models of neural dynamics
·1877 words·9 mins
AI Theory Generalization 🏢 Harvard University
Partially observing neural circuits during experiments can produce mechanistically misleading models even when single-neuron activity is matched; researchers need better validation methods.
Order-Independence Without Fine Tuning
·1791 words·9 mins
Natural Language Processing Large Language Models 🏢 Harvard University
Set-Based Prompting guarantees order-independent LLM outputs by modifying input representations, eliminating unwanted inconsistencies without fine-tuning.
Optimal ablation for interpretability
·3425 words·17 mins
AI Theory Interpretability 🏢 Harvard University
Optimal ablation (OA) improves model interpretability by precisely measuring component importance, outperforming existing methods. OA-based importance shines in tasks such as circuit discovery and factual recall.
Multistable Shape from Shading Emerges from Patch Diffusion
·2364 words·12 mins
3D Vision 🏢 Harvard University
A novel diffusion model reconstructs multimodal shape distributions from shading, mirroring human multistable perception.
Multi-Group Proportional Representation in Retrieval
·4416 words·21 mins
AI Theory Fairness 🏢 Harvard University
Multi-group Proportional Representation (MPR) tackles skewed search results by measuring representation across intersectional groups, improving fairness in image retrieval.
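In spirit, MPR asks how far the retrieved set's group shares deviate from reference proportions, including for intersectional groups. The sketch below computes a worst-case share gap over user-supplied group predicates; the real metric is defined over a richer function class, so treat this as a toy approximation.

```python
def mpr_violation(retrieved_attrs, population_attrs, groups):
    """Rough sketch of a multi-group proportional-representation check.

    `retrieved_attrs` and `population_attrs` are lists of attribute dicts
    (one per item); `groups` is a list of predicates, each defining a
    (possibly intersectional) group.  Returns the largest absolute gap
    between a group's share of the retrieved set and its share of the
    reference population; zero means perfectly proportional retrieval.
    """
    def share(items, predicate):
        return sum(predicate(x) for x in items) / max(len(items), 1)

    return max(abs(share(retrieved_attrs, g) - share(population_attrs, g)) for g in groups)

# toy usage with hypothetical gender x age groups
population = [{"gender": g, "age": a} for g in ("f", "m") for a in ("young", "old")] * 25
retrieved = [{"gender": "f", "age": "young"}] * 8 + [{"gender": "m", "age": "old"}] * 2
groups = [
    lambda x: x["gender"] == "f",
    lambda x: x["age"] == "old",
    lambda x: x["gender"] == "f" and x["age"] == "young",  # intersectional group
]
print(mpr_violation(retrieved, population, groups))
```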
Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)
·4104 words·20 mins
AI Generated Multimodal Learning Vision-Language Models 🏢 Harvard University
SpLiCE unlocks CLIP’s potential by transforming its dense, opaque representations into sparse, human-interpretable concept embeddings.
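The underlying operation is sparse coding: express a CLIP image embedding as a sparse, non-negative combination of concept (text) embeddings. The sketch below uses scikit-learn's Lasso as a stand-in solver and random vectors as stand-ins for CLIP embeddings; it illustrates the decomposition, not SpLiCE's exact dictionary or optimizer.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_concept_decomposition(image_emb, concept_embs, alpha=0.001):
    """Decompose an image embedding into a sparse, non-negative
    combination of concept embeddings.

    `concept_embs` is an (n_concepts, d) dictionary, e.g. text embeddings
    of single words; the returned weight vector indicates which concepts
    the image activates.  Generic sparse-coding sketch in the spirit of
    SpLiCE, not the authors' exact method.
    """
    lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=50000)
    lasso.fit(concept_embs.T, image_emb)  # solve image_emb ≈ concept_embs.T @ w
    return lasso.coef_                    # sparse, non-negative concept weights

# toy usage with random stand-ins for CLIP embeddings
rng = np.random.default_rng(0)
concepts = rng.normal(size=(50, 32))
concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)
image = 0.7 * concepts[3] + 0.3 * concepts[17]          # a mix of two "concepts"
weights = sparse_concept_decomposition(image, concepts)
print(np.argsort(weights)[-2:])  # the two largest weights should be concepts 3 and 17
```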
Infinite Limits of Multi-head Transformer Dynamics
·4731 words·23 mins
AI Generated Machine Learning Deep Learning 🏢 Harvard University
Researchers reveal how the training dynamics of transformer models behave at infinite width, depth, and head count, providing key insights for scaling up these models.
Honor Among Bandits: No-Regret Learning for Online Fair Division
·357 words·2 mins
AI Theory Fairness 🏢 Harvard University
Online fair division algorithm achieves Õ(T²/³) regret while guaranteeing envy-freeness or proportionality in expectation, a result proven tight.
Hardness of Learning Neural Networks under the Manifold Hypothesis
·2154 words·11 mins
🏢 Harvard University
Neural network learnability under the manifold hypothesis is hard except for efficiently sampleable manifolds.