Posters
2024
Private Algorithms for Stochastic Saddle Points and Variational Inequalities: Beyond Euclidean Geometry
·315 words·2 mins·
AI Generated
AI Theory
Optimization
🏢 Ohio State University
This paper presents novel, privacy-preserving algorithms achieving near-optimal rates for solving stochastic saddle point problems and variational inequalities in non-Euclidean geometries.
Privacy without Noisy Gradients: Slicing Mechanism for Generative Model Training
·1432 words·7 mins·
AI Theory
Privacy
🏢 MIT-IBM Watson AI Lab, IBM Research
Train high-quality generative models with strong differential privacy using a novel slicing mechanism that injects noise into random low-dimensional data projections, avoiding noisy gradients.
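A minimal sketch of the idea behind the slicing mechanism, assuming the released statistics are noisy one-dimensional random projections of the data rather than noisy gradients; the function name, `num_slices`, and `sigma` are illustrative assumptions, and in practice the noise scale would be calibrated to the projections' sensitivity and the privacy budget.

```python
import numpy as np

def noisy_random_projections(data, num_slices=32, sigma=1.0, rng=None):
    """Conceptual sketch: project the data onto random low-dimensional
    directions ("slices") and add Gaussian noise to the projections,
    so downstream generative-model training never touches per-example gradients."""
    rng = np.random.default_rng(rng)
    n, d = data.shape
    # Random unit directions defining the one-dimensional slices.
    directions = rng.standard_normal((d, num_slices))
    directions /= np.linalg.norm(directions, axis=0, keepdims=True)
    projections = data @ directions                              # shape (n, num_slices)
    noisy = projections + rng.normal(scale=sigma, size=projections.shape)
    return directions, noisy                                     # released instead of raw data
```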
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models
·1757 words·9 mins·
AI Theory
Privacy
🏢 Google DeepMind
Researchers reveal ‘privacy backdoors,’ a new attack that exploits pre-trained models to leak user training data, highlighting critical vulnerabilities and prompting stricter model security measures.
Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs
·4267 words·21 mins·
AI Generated
Multimodal Learning
Vision-Language Models
🏢 Nanjing University
Prism: a novel framework that disentangles perception and reasoning in Vision-Language Models (VLMs) for improved model assessment and efficient VLM development.
Prior-itizing Privacy: A Bayesian Approach to Setting the Privacy Budget in Differential Privacy
·1883 words·9 mins·
AI Theory
Privacy
🏢 Department of Statistical Science
This paper introduces a Bayesian approach to setting the privacy budget in differential privacy, enabling agencies to balance data utility and confidentiality by customizing risk profiles.
Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors
·2800 words·14 mins·
Computer Vision
Image Generation
🏢 Department of Computing and Mathematical Sciences, Caltech
Principled Probabilistic Imaging uses diffusion models as plug-and-play priors for accurate posterior sampling in inverse problems, surpassing existing methods.
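A hedged sketch of one Langevin-style posterior update that combines a diffusion model's learned score with a measurement-consistency gradient for an inverse problem y = A(x) + noise; `score_model`, `forward_op`, `step_size`, and `guidance` are assumed placeholders, not the paper's exact sampler.

```python
import torch

def posterior_step(x_t, t, y, forward_op, score_model, step_size=0.1, guidance=1.0):
    """Conceptual sketch of one posterior-sampling update: follow the diffusion
    prior's score and nudge the sample toward agreement with the measurements y."""
    x_t = x_t.detach().requires_grad_(True)
    prior_score = score_model(x_t, t)                      # learned approximation of grad log p_t(x)
    data_fit = -0.5 * ((forward_op(x_t) - y) ** 2).sum()   # Gaussian log-likelihood (up to a constant)
    likelihood_grad = torch.autograd.grad(data_fit, x_t)[0]
    return (x_t + step_size * (prior_score + guidance * likelihood_grad)).detach()
```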
Pricing and Competition for Generative AI
·1529 words·8 mins·
AI Generated
AI Applications
Human-AI Interaction
🏢 NVIDIA & University of Ottawa
Generative AI’s unique characteristics necessitate new pricing strategies; this paper models a sequential pricing game between competing firms, revealing that the first-mover’s performance needs to be significantly…
Preventing Model Collapse in Deep Canonical Correlation Analysis by Noise Regularization
·2437 words·12 mins·
Multimodal Learning
Representation Learning
🏢 Hong Kong Polytechnic University
Noise Regularization rescues Deep Canonical Correlation Analysis from model collapse!
Preventing Dimensional Collapse in Self-Supervised Learning via Orthogonality Regularization
·2561 words·13 mins·
Machine Learning
Self-Supervised Learning
🏢 Hong Kong Polytechnic University
Orthogonal regularization prevents dimensional collapse in self-supervised learning, significantly boosting model performance across diverse benchmarks.
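A minimal sketch of one common form of orthogonality regularization, assuming the penalty is applied to a batch of embeddings; the shape of `features` (batch, dim) and the weighting `lambda_reg` in the usage line are illustrative assumptions, not the paper's exact loss.

```python
import torch

def orthogonality_penalty(features):
    """Conceptual sketch: penalize deviation of the feature Gram matrix from the
    identity, ||Z^T Z / n - I||_F^2, so embedding dimensions stay decorrelated
    and do not collapse onto a few axes."""
    n, d = features.shape
    gram = features.t() @ features / n
    identity = torch.eye(d, device=features.device)
    return ((gram - identity) ** 2).sum()

# Usage sketch: total_loss = ssl_loss + lambda_reg * orthogonality_penalty(z)
```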
Pretraining with Random Noise for Fast and Robust Learning without Weight Transport
·1659 words·8 mins·
Machine Learning
Deep Learning
🏢 Korea Advanced Institute of Science and Technology
Random noise pretraining dramatically speeds up and enhances neural network learning without weight transport, mimicking the brain’s developmental process and achieving performance comparable to backpropagation.
Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs
·2558 words·13 mins·
Machine Learning
Few-Shot Learning
🏢 Caltech
CoDA-NO, a novel neural operator, revolutionizes multiphysics PDE solving via codomain tokenization, enabling efficient self-supervised pretraining and few-shot learning for superior generalization.
Pretrained Transformer Efficiently Learns Low-Dimensional Target Functions In-Context
·1358 words·7 mins·
Natural Language Processing
Large Language Models
🏢 University of California, Berkeley
Pretrained transformers surprisingly learn low-dimensional nonlinear functions efficiently from few in-context examples, outperforming baseline algorithms.
Pretrained Optimization Model for Zero-Shot Black Box Optimization
·4005 words·19 mins·
Machine Learning
Optimization
🏢 Xidian University
Pretrained Optimization Model (POM) excels at zero-shot black-box optimization, outperforming existing methods, especially in high dimensions, through direct application or few-shot fine-tuning.
PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference
·4348 words·21 mins·
Computer Vision
Image Generation
🏢 City University of Hong Kong
PrefPaint: Aligning image inpainting diffusion models with human preferences using reinforcement learning, resulting in significantly improved visual appeal.
Preferential Normalizing Flows
·2903 words·14 mins·
Machine Learning
Deep Learning
🏢 University of Helsinki
Normalizing flows with a novel functional prior make it possible to elicit high-dimensional probability distributions from experts using only preference comparisons, resolving the problem of collapsing…
Preference-based Pure Exploration
·357 words·2 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 University of Michigan
The PreTS algorithm efficiently identifies the most preferred policy in bandit problems with vector-valued rewards, achieving asymptotically optimal sample complexity.
Preference Learning of Latent Decision Utilities with a Human-like Model of Preferential Choice
·2258 words·11 mins·
Machine Learning
Reinforcement Learning
🏢 Aalto University
Human-like choice modeling revolutionizes preference learning! A new tractable model, CRCS, significantly improves utility inference from human data, outperforming existing methods.
Preference Learning Algorithms Do Not Learn Preference Rankings
·2930 words·14 mins·
Natural Language Processing
Large Language Models
🏢 New York University
Despite common belief, state-of-the-art preference learning algorithms for LLMs achieve surprisingly low ranking accuracy, highlighting significant flaws in current alignment techniques.
Preference Alignment with Flow Matching
·2735 words·13 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 KAIST AI
Preference Flow Matching (PFM) streamlines preference integration into pre-trained models using flow matching, overcoming fine-tuning limitations and enabling robust alignment with human preferences.
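A hedged sketch of a conditional flow-matching objective on preference pairs, assuming straight-line interpolation paths from less-preferred to preferred samples; `velocity_net` and the pairing of `x_less`/`x_more` are illustrative assumptions rather than PFM's exact formulation.

```python
import torch

def flow_matching_loss(velocity_net, x_less, x_more):
    """Conceptual sketch: learn a velocity field that transports less-preferred
    outputs (x_less) toward preferred ones (x_more) along straight-line paths.
    Assumes both tensors have shape (batch, dim)."""
    t = torch.rand(x_less.shape[0], 1)               # random time in [0, 1] per pair
    x_t = (1 - t) * x_less + t * x_more              # point on the interpolation path
    target_velocity = x_more - x_less                # straight-line target direction
    pred = velocity_net(x_t, t)
    return ((pred - target_velocity) ** 2).mean()
```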
Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning
·3127 words·15 mins·
AI Generated
Natural Language Processing
Machine Translation
🏢 Microsoft Research
PCformer boosts Transformer performance by using a predictor-corrector learning framework and exponential moving average coefficient learning for high-order prediction, achieving state-of-the-art results.
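A minimal sketch of exponential moving average coefficient smoothing, assuming a scalar coefficient updated once per step; `beta` is an assumed decay rate, and how PCformer actually uses the smoothed coefficients inside its predictor-corrector updates is not shown.

```python
def ema_update(coeff, observed, beta=0.9):
    """Exponential moving average: blend the running coefficient with the newly
    observed value, weighting the history by beta."""
    return beta * coeff + (1 - beta) * observed
```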