
🏢 Carnegie Mellon University

Analytically deriving Partial Information Decomposition for affine systems of stable and convolution-closed distributions
·1956 words·10 mins
AI Generated AI Theory Causality 🏢 Carnegie Mellon University
This paper presents novel theoretical results enabling the analytical calculation of Partial Information Decomposition for various probability distributions, including those relevant to neuroscience, …
Alignment for Honesty
·3666 words·18 mins
AI Generated Natural Language Processing Large Language Models 🏢 Carnegie Mellon University
This paper introduces a novel framework for aligning LLMs with honesty, proposing new metrics and training techniques to make LLMs more truthful and less prone to confidently incorrect responses.
Aggregating Quantitative Relative Judgments: From Social Choice to Ranking Prediction
·2425 words·12 mins
AI Theory Optimization 🏢 Carnegie Mellon University
This paper introduces Quantitative Relative Judgment Aggregation (QRJA), a novel social choice model, and applies it to ranking prediction, yielding effective and interpretable results on various real…
Adversarially Robust Dense-Sparse Tradeoffs via Heavy-Hitters
·388 words·2 mins
AI Generated AI Theory Robustness 🏢 Carnegie Mellon University
This paper presents improved adversarially robust streaming algorithms for L_p estimation, surpassing previous state-of-the-art space bounds and disproving the existence of inherent barriers.
Active, anytime-valid risk controlling prediction sets
·1276 words·6 mins
Machine Learning Active Learning 🏢 Carnegie Mellon University
This paper introduces anytime-valid risk-controlling prediction sets for active learning, guaranteeing low risk even with adaptive data collection and limited label budgets.
Achieving Domain-Independent Certified Robustness via Knowledge Continuity
·2020 words·10 mins
AI Theory Robustness 🏢 Carnegie Mellon University
Certifying neural network robustness across diverse domains, this paper introduces knowledge continuity, a novel framework ensuring model stability independent of input type, norms, and distribution.
Accelerating ERM for data-driven algorithm design using output-sensitive techniques
·366 words·2 mins
AI Theory Optimization 🏢 Carnegie Mellon University
Accelerating ERM for data-driven algorithm design using output-sensitive techniques achieves computationally efficient learning by scaling with the actual number of pieces in the dual loss function, n…
A theoretical case-study of Scalable Oversight in Hierarchical Reinforcement Learning
·414 words·2 mins
Machine Learning Reinforcement Learning 🏢 Carnegie Mellon University
Bounded human feedback hinders the training of large AI models. This paper introduces hierarchical reinforcement learning to enable scalable oversight, efficiently acquiring feedback and learning optimal poli…