
Binarized Diffusion Model for Image Super-Resolution
·1567 words·8 mins
Computer Vision Image Generation 🏢 ETH Zurich
BI-DiffSR, a novel binarized diffusion model, achieves high-quality image super-resolution with significantly reduced memory and computational costs, outperforming existing methods.
Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?
·3015 words·15 mins
AI Theory Interpretability 🏢 ETH Zurich
This paper presents a novel method to make black box neural networks intervenable using only a small validation set with concept labels, improving the effectiveness of concept-based interventions.
BetterDepth: Plug-and-Play Diffusion Refiner for Zero-Shot Monocular Depth Estimation
·3663 words·18 mins
Computer Vision 3D Vision 🏢 ETH Zurich
BetterDepth: A plug-and-play diffusion refiner boosts zero-shot monocular depth estimation by adding fine details while preserving accurate geometry.
Bayes-optimal learning of an extensive-width neural network from quadratically many samples
·1425 words·7 mins
AI Theory Optimization 🏢 ETH Zurich
This study solves a key challenge in neural network learning, deriving a closed-form expression for the Bayes-optimal test error of extensive-width networks with quadratic activation functions from quadratically many samples.
Bandits with Preference Feedback: A Stackelberg Game Perspective
·1514 words·8 mins
Machine Learning Optimization 🏢 ETH Zurich
MAXMINLCB, a novel game-theoretic algorithm, efficiently solves bandit problems with preference feedback over continuous domains, providing anytime-valid, rate-optimal regret guarantees.
Achieving Near-Optimal Convergence for Distributed Minimax Optimization with Adaptive Stepsizes
·2347 words·12 mins
AI Generated Machine Learning Federated Learning 🏢 ETH Zurich
D-AdaST, a novel distributed adaptive minimax optimization method, achieves near-optimal convergence by tracking stepsizes, resolving the stepsize inconsistency that hinders existing adaptive methods.
Achievable distributional robustness when the robust risk is only partially identified
·1876 words·9 mins
AI Generated AI Theory Robustness 🏢 ETH Zurich
This paper introduces a novel framework for evaluating the robustness of machine learning models when the true data distribution is only partially known, defining a new risk measure for this partially identified setting.