🏢 Google Research
Linear Regression using Heterogeneous Data Batches
·1554 words·8 mins·
Meta Learning
🏢 Google Research
New algorithm efficiently solves linear regression with heterogeneous data batches, handling diverse input distributions and achieving high accuracy with fewer samples.
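To make the setting concrete, here is a minimal sketch (NumPy, and not the paper's algorithm): many tiny batches, each generated by one of a few unknown regressors, fitted by a simple alternating-minimization baseline. The function and variable names are illustrative only.

```python
# Minimal sketch (not the paper's algorithm): alternating minimization for
# k linear regressors shared across small, heterogeneous data batches.
# Each batch is assumed to come from one of k unknown regressors.
import numpy as np

def fit_batches(batches, k=2, iters=20, seed=0):
    """batches: list of (X, y) pairs; returns k weight vectors."""
    rng = np.random.default_rng(seed)
    d = batches[0][0].shape[1]
    W = rng.normal(size=(k, d))                      # k candidate regressors
    for _ in range(iters):
        # Assign every batch to the regressor with the smallest residual.
        assign = [np.argmin([np.mean((y - X @ w) ** 2) for w in W])
                  for X, y in batches]
        # Refit each regressor on the batches currently assigned to it.
        for j in range(k):
            Xs = [X for (X, y), a in zip(batches, assign) if a == j]
            ys = [y for (X, y), a in zip(batches, assign) if a == j]
            if Xs:
                Xj, yj = np.vstack(Xs), np.concatenate(ys)
                W[j] = np.linalg.lstsq(Xj, yj, rcond=None)[0]
    return W

# Toy usage: two ground-truth regressors, many batches of only 5 samples each.
rng = np.random.default_rng(1)
true_W = np.array([[2.0, -1.0], [-3.0, 0.5]])
batches = []
for _ in range(200):
    X = rng.normal(size=(5, 2))
    w = true_W[rng.integers(2)]
    batches.append((X, X @ w + 0.1 * rng.normal(size=5)))
print(fit_batches(batches, k=2))
```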
Learning-Augmented Approximation Algorithms for Maximum Cut and Related Problems
·249 words·2 mins·
AI Theory
Optimization
🏢 Google Research
This paper shows how noisy predictions about optimal solutions can improve approximation algorithms for NP-hard problems like MAX-CUT, exceeding classical hardness bounds.
Learning Generalized Linear Programming Value Functions
·1999 words·10 mins·
AI Theory
Optimization
🏢 Google Research
Learn optimal LP values faster with a novel neural network method!
IllumiNeRF: 3D Relighting Without Inverse Rendering
·2411 words·12 mins·
Computer Vision
3D Vision
🏢 Google Research
IllumiNeRF: Relightable 3D reconstruction without inverse rendering using image diffusion and NeRF.
Hyperbolic Embeddings of Supervised Models
·2703 words·13 mins·
Machine Learning
Representation Learning
🏢 Google Research
This paper presents a novel approach for embedding supervised models in hyperbolic space, linking loss functions to hyperbolic distances and introducing monotonic decision trees for unambiguous visual…
How to Boost Any Loss Function
·3432 words·17 mins·
AI Generated
AI Theory
Optimization
🏢 Google Research
This paper proves that boosting, traditionally limited by assumptions about the loss function, can efficiently optimize any loss function, regardless of differentiability or convexity.
Generative Forests
·4100 words·20 mins·
Machine Learning
Generative Learning
🏢 Google Research
Generative Forests (GFs) revolutionize tabular data generation with a novel forest-based model and a simple boosting algorithm offering strong convergence guarantees, significantly outperforming curre…
Extending Video Masked Autoencoders to 128 frames
·2466 words·12 mins·
Computer Vision
Video Understanding
🏢 Google Research
Long-video masked autoencoders (LVMAE) achieve state-of-the-art performance by using an adaptive masking strategy that prioritizes important video tokens, enabling efficient training on 128 frames.
Embedding-Aligned Language Models
·2605 words·13 mins·
Natural Language Processing
Large Language Models
🏢 Google Research
EAGLE: Guiding LLMs using latent embeddings for controlled text generation.
Efficiency of the First-Price Auction in the Autobidding World
·465 words·3 mins·
AI Theory
Optimization
🏢 Google Research
First-price auction efficiency in autobidding plummets to 45.7% with mixed bidders, but machine-learned advice restores optimality.
DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning
·2927 words·14 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Google Research
DynaMITE-RL: A new meta-RL approach masters environments with evolving latent states by cleverly modeling episode sessions and refining existing meta-RL techniques.
Differentially Private Optimization with Sparse Gradients
·1282 words·7 mins·
AI Theory
Privacy
🏢 Google Research
This paper presents new, nearly optimal differentially private algorithms for handling sparse gradients, significantly improving efficiency and scalability in large embedding models.
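As a rough illustration of the setting only (a generic DP-SGD-style Gaussian-mechanism baseline, not the paper's method), the sketch below clips per-example gradients and adds dense Gaussian noise; adding dense noise to mostly-zero gradients is exactly the inefficiency the paper's algorithms target. All names here are hypothetical.

```python
# Minimal sketch (generic DP-SGD-style baseline, NOT the paper's algorithm):
# clip each per-example gradient and add Gaussian noise before averaging.
import numpy as np

def dp_mean_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """per_example_grads: array of shape (n, d); returns a privatized mean gradient."""
    rng = np.random.default_rng(seed)
    n, d = per_example_grads.shape
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale              # each row now has norm <= clip_norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=d)
    return (clipped.sum(axis=0) + noise) / n         # noisy average gradient

# Toy usage with sparse gradients, as in large embedding models: each example
# touches only one of 10,000 coordinates, yet the noise above is dense.
grads = np.zeros((32, 10_000))
cols = np.random.default_rng(1).integers(0, 10_000, size=32)
grads[np.arange(32), cols] = 1.0
print(dp_mean_gradient(grads)[:5])
```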
Convergence of No-Swap-Regret Dynamics in Self-Play
·1267 words·6 mins·
AI Theory
Optimization
🏢 Google Research
In symmetric zero-sum games, no-swap-regret dynamics guarantee strong convergence to Nash Equilibrium under symmetric initial conditions, but this advantage disappears when constraints are relaxed.
Contracting with a Learning Agent
·2554 words·12 mins·
AI Theory
Optimization
🏢 Google Research
Repeated contracts with learning agents are optimized by a simple dynamic contract: initially linear, then switching to zero-cost, causing the agent’s actions to ‘free-fall’ and yield non-zero rewards…
Cardinality-Aware Set Prediction and Top-$k$ Classification
·1676 words·8 mins·
Machine Learning
Deep Learning
🏢 Google Research
This paper proposes cardinality-aware top-k classification, improving accuracy and efficiency by dynamically adjusting prediction set sizes.
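A toy illustration of the idea (not the paper's learned cardinality policy): choose the smallest prediction set whose cumulative confidence passes a threshold, so the set size adapts to how hard each example is. The threshold and cap below are arbitrary illustrative choices.

```python
# Minimal sketch (illustrative only): variable-size top-k prediction sets
# chosen by cumulative softmax mass, capped at k_max labels.
import numpy as np

def cardinality_aware_topk(probs, threshold=0.9, k_max=5):
    """probs: (n_classes,) softmax vector; returns a variable-size label set."""
    order = np.argsort(probs)[::-1]                  # classes by decreasing confidence
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, threshold) + 1)     # smallest k covering the mass
    return order[:min(k, k_max)].tolist()

# Confident example -> singleton set; ambiguous example -> larger set.
print(cardinality_aware_topk(np.array([0.93, 0.03, 0.02, 0.01, 0.01])))   # [0]
print(cardinality_aware_topk(np.array([0.35, 0.30, 0.20, 0.10, 0.05])))   # [0, 1, 2, 3]
```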
Beating Adversarial Low-Rank MDPs with Unknown Transition and Bandit Feedback
·355 words·2 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Google Research
New algorithms conquer adversarial low-rank MDPs, improving regret bounds for unknown transitions and bandit feedback.
Autobidder's Dilemma: Why More Sophisticated Autobidders Lead to Worse Auction Efficiency
·332 words·2 mins·
AI Theory
Optimization
🏢 Google Research
More sophisticated autobidders surprisingly worsen online auction efficiency; a fine-grained analysis reveals that less powerful, uniform bidders lead to better market outcomes.
Auditing Privacy Mechanisms via Label Inference Attacks
·1460 words·7 mins·
AI Theory
Privacy
🏢 Google Research
New metrics audit label privatization, revealing differentially private schemes often outperform heuristic methods in the privacy-utility tradeoff.
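For intuition, here is a small illustrative audit (not the paper's metrics): privatize binary labels with randomized response and measure how often an attacker recovers the true label; attack accuracy traces out a simple privacy-utility curve as the privacy budget varies.

```python
# Minimal sketch (illustrative audit, not the paper's metrics): randomized
# response on binary labels, evaluated by a label-inference attack.
import numpy as np

def randomized_response(labels, eps, rng):
    """Keep each binary label with prob e^eps/(1+e^eps); eps-DP for labels."""
    keep = np.exp(eps) / (1.0 + np.exp(eps))
    flip = rng.random(labels.shape) > keep
    return np.where(flip, 1 - labels, labels)

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100_000)
for eps in [0.5, 1.0, 3.0]:
    released = randomized_response(labels, eps, rng)
    # With uniform priors the attacker's best guess is the released label itself,
    # so attack accuracy approaches the keep probability.
    attack_accuracy = np.mean(released == labels)
    print(f"eps={eps}: label-inference attack accuracy ~ {attack_accuracy:.3f}")
```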
Approximating the Top Eigenvector in Random Order Streams
·341 words·2 mins·
AI Theory
Optimization
🏢 Google Research
This paper presents novel algorithms for approximating the top eigenvector of data arriving as a random-order stream, with improved space complexity and near-optimal bounds.
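As a point of reference (a classical streaming baseline, not the paper's algorithm), Oja's rule approximates the top eigenvector with O(d) memory by processing one row at a time; the names and step sizes below are illustrative.

```python
# Minimal sketch (classical Oja baseline, NOT the paper's algorithm):
# streaming approximation of the top eigenvector of the data covariance.
import numpy as np

def oja_top_eigenvector(stream, dim, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for t, x in enumerate(stream, start=1):
        v += (lr / np.sqrt(t)) * x * (x @ v)     # stochastic power-iteration step
        v /= np.linalg.norm(v)                   # keep the iterate on the unit sphere
    return v

# Toy usage: rows with one dominant direction, arriving in random order.
rng = np.random.default_rng(1)
top = np.array([0.8, 0.6, 0.0])                  # unit-norm ground-truth direction
rows = np.array([3.0 * top * rng.normal() + 0.1 * rng.normal(size=3)
                 for _ in range(5000)])
rng.shuffle(rows)
v = oja_top_eigenvector(rows, dim=3)
print(np.abs(v @ top))                           # close to 1 when well aligned
```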