🏢 Amazon
Sample-Efficient Agnostic Boosting
·1303 words·7 mins·
Machine Learning
Reinforcement Learning
🏢 Amazon
Agnostic boosting gets a major efficiency upgrade! A new algorithm leverages sample reuse to drastically reduce the data needed for accurate learning, closing the gap with computationally expensive al…
Risk-Averse Fine-tuning of Large Language Models
·3716 words·18 mins·
Natural Language Processing
Large Language Models
🏢 Amazon
Risk-averse RLHF fine-tunes LLMs to minimize toxic outputs while maintaining performance on standard benchmarks.
Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable
·2340 words·11 mins·
AI Theory
Privacy
🏢 Amazon
Deleting data from machine learning models exposes individuals to highly accurate reconstruction attacks, and this research demonstrates the vulnerability even for simple models.
Pre-training Differentially Private Models with Limited Public Data
·3129 words·15 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Amazon
Researchers achieved high-accuracy differentially private (DP) models by using a novel DP continual pre-training strategy with only 10% public data, mitigating the performance degradation common in DP…
Causal vs. Anticausal merging of predictors
·304 words·2 mins·
AI Theory
Causality
🏢 Amazon
Causal assumptions drastically alter how predictors are merged: CMAXENT yields logistic regression in the causal direction and LDA in the anticausal direction.