Posters
2024
PuLID: Pure and Lightning ID Customization via Contrastive Alignment
·3805 words·18 mins·
AI Generated
Computer Vision
Image Generation
🏢 ByteDance Inc.
PuLID: Lightning-fast, tuning-free ID customization for text-to-image!
Public-data Assisted Private Stochastic Optimization: Power and Limitations
·337 words·2 mins·
AI Generated
AI Theory
Privacy
🏢 Meta
Leveraging public data enhances differentially private (DP) learning, but its limits are unclear. This paper establishes tight theoretical bounds for DP stochastic convex optimization, revealing when …
PTQ4DiT: Post-training Quantization for Diffusion Transformers
·2510 words·12 mins·
AI Generated
Computer Vision
Image Generation
🏢 University of Illinois Chicago
PTQ4DiT achieves 8-bit and even 4-bit weight precision for Diffusion Transformers, significantly improving efficiency for image generation without sacrificing quality.
PSL: Rethinking and Improving Softmax Loss from Pairwise Perspective for Recommendation
·2716 words·13 mins·
AI Applications
Recommendation Systems
🏢 Zhejiang University
Pairwise Softmax Loss (PSL) improves recommendation accuracy by enhancing Softmax Loss (SL) with alternative activation functions, resulting in tighter ranking metric surrogates and better noise resis…
Pseudo-Siamese Blind-spot Transformers for Self-Supervised Real-World Denoising
·2358 words·12 mins·
AI Generated
Computer Vision
Image Denoising
🏢 South China University of Technology
SelfFormer: A novel self-supervised transformer-based method outperforms existing techniques by leveraging directional self-attention for efficient and accurate real-world image denoising.
Pseudo-Private Data Guided Model Inversion Attacks
·4550 words·22 mins·
AI Generated
AI Theory
Privacy
🏢 University of Texas at Austin
Pseudo-Private Data Guided Model Inversion (PPDG-MI) significantly improves model inversion attacks by dynamically tuning the generative model to increase the sampling probability of actual private da…
Pruning neural network models for gene regulatory dynamics using data and domain knowledge
·3492 words·17 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Harvard University
DASH: a novel pruning framework leverages domain knowledge to improve the interpretability and sparsity of neural network models for gene regulatory dynamics, outperforming existing methods.
Prune and Repaint: Content-Aware Image Retargeting for any Ratio
·2137 words·11 mins·
Computer Vision
Image Generation
🏢 Southeast University
Prune and Repaint: A new content-aware method for superior image retargeting across any aspect ratio, preserving key features and avoiding artifacts.
ProxyFusion: Face Feature Aggregation Through Sparse Experts
·2043 words·10 mins·
AI Generated
Computer Vision
Face Recognition
🏢 University at Buffalo
ProxyFusion, a novel face feature fusion method, achieves real-time performance by using sparse experts to weight features without relying on intermediate representations or metadata, substantially im…
Proximal Causal Inference With Text Data
·4077 words·20 mins·
AI Theory
Causality
🏢 Johns Hopkins University
Unmeasured confounders hinder causal inference; this paper introduces a novel method using two pre-treatment text instances and zero-shot models to infer proxies for unobserved confounders, enabling p…
ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field
·1770 words·9 mins·
Computer Vision
3D Vision
🏢 Stanford University
ProvNeRF enhances NeRF reconstruction by modeling per-point provenance as a stochastic field, improving novel view synthesis and uncertainty estimation, particularly in sparse, unconstrained view sett…
Proving Theorems Recursively
·2409 words·12 mins·
AI Theory
Optimization
🏢 University of Edinburgh
POETRY: a recursive neural theorem prover achieving a 5.1% higher success rate and solving substantially longer proofs.
Provably Transformers Harness Multi-Concept Word Semantics for Efficient In-Context Learning
·1305 words·7 mins·
Natural Language Processing
Large Language Models
🏢 Department of Computer Science, City University of Hong Kong
Transformers excel at in-context learning (ICL), solving new tasks with just prompts. This paper provides a mathematical explanation, showing how transformers use multi-concept word semantics to achie…
Provably Safe Neural Network Controllers via Differential Dynamic Logic
·2824 words·14 mins·
AI Theory
Safety
🏢 Karlsruhe Institute of Technology
Verifiably safe AI controllers are created via a novel framework, VerSAILLE, which uses differential dynamic logic and open-loop NN verification to prove safety for unbounded time horizons.
Provably Robust Score-Based Diffusion Posterior Sampling for Plug-and-Play Image Reconstruction
·1596 words·8 mins·
Machine Learning
Deep Learning
🏢 Carnegie Mellon University
Provably robust diffusion posterior sampling for plug-and-play image reconstruction is achieved via a novel algorithmic framework, DPnP, offering both asymptotic and non-asymptotic performance guarant…
Provably Optimal Memory Capacity for Modern Hopfield Models: Transformer-Compatible Dense Associative Memories as Spherical Codes
·1812 words·9 mins·
AI Theory
Representation Learning
🏢 Northwestern University
Researchers achieve provably optimal memory capacity in transformer-compatible Hopfield models by framing the problem as an optimal spherical code arrangement, resulting in a novel sublinear time algo…
Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer
·1760 words·9 mins·
Natural Language Processing
Large Language Models
🏢 Northwestern University
RLHF’s overoptimization problem is mitigated by RPO, a novel algorithm that uses SFT loss as an implicit adversarial regularizer, ensuring efficient and effective LLM alignment.
Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling
·1398 words·7 mins·
Machine Learning
Optimization
🏢 University of Maryland College Park
Faster bilevel optimization is achieved via without-replacement sampling, improving convergence rates compared to independent sampling methods.
Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation
·328 words·2 mins·
Machine Learning
Reinforcement Learning
🏢 National Key Laboratory for Novel Software Technology, Nanjing University
This paper presents novel RL algorithms using multinomial logit function approximation, achieving O(1) computation and storage while nearly closing the regret gap with linear methods.
Provably Efficient Interactive-Grounded Learning with Personalized Reward
·427 words·3 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 University of Iowa
Provably efficient algorithms are introduced for interactive-grounded learning (IGL) with context-dependent feedback, addressing the lack of theoretical guarantees in existing approaches for personali…