Posters
2024
Quantum Algorithms for Non-smooth Non-convex Optimization
·360 words·2 mins·
AI Theory
Optimization
🏢 Chinese University of Hong Kong
Quantum algorithms achieve speedups in non-smooth, non-convex optimization, outperforming classical methods by a factor of ε⁻²/³ in query complexity for finding (δ,ε)-Goldstein stationary points.
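For reference, the standard definition behind the summary (not specific to this paper's proofs): x is a (δ,ε)-Goldstein stationary point of f when some convex combination of gradients taken within a δ-ball around x has norm at most ε.

```latex
% (delta, epsilon)-Goldstein stationarity:
\min_{g \,\in\, \operatorname{conv}\left(\bigcup_{\|y-x\| \le \delta} \partial f(y)\right)} \|g\| \;\le\; \varepsilon
```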
Quantum algorithm for large-scale market equilibrium computation
·643 words·4 mins·
AI Generated
AI Theory
Optimization
🏢 Centre for Quantum Technologies, National University of Singapore
A quantum algorithm achieves a provable speedup for computing large-scale market equilibria.
Quantitative Convergences of Lie Group Momentum Optimizers
·1602 words·8 mins·
Machine Learning
Optimization
🏢 Georgia Institute of Technology
Accelerated Lie group optimization achieved via a novel momentum algorithm (Lie NAG-SC) with proven convergence rates, surpassing existing methods in efficiency.
Quantifying the Gain in Weak-to-Strong Generalization
·2368 words·12 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 Stanford University
Weakly supervised strong models outperform weak models; this gain is precisely quantified by the strong model’s misfit error on weak labels.
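As a hedged sketch of the kind of identity the summary describes, under squared loss, with f_w2s the strong model fit to the weak model f_weak's labels (my notation, an illustration rather than the paper's exact statement):

```latex
% gain over the weak supervisor, expressed through the misfit between the two models
\mathrm{err}(f_{\mathrm{weak}}) \;-\; \mathrm{err}(f_{\mathrm{w2s}})
\;\approx\; \mathbb{E}_x\!\left[\big(f_{\mathrm{w2s}}(x) - f_{\mathrm{weak}}(x)\big)^2\right]
```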
Quantifying and Optimizing Global Faithfulness in Persona-driven Role-playing
·2570 words·13 mins·
Natural Language Processing
Dialogue Systems
🏢 UC San Diego
The new APC metric precisely quantifies and optimizes global faithfulness in persona-driven role-playing, offering fine-grained, explainable evaluation and improving AI character consistency.
Quantifying Aleatoric Uncertainty of the Treatment Effect: A Novel Orthogonal Learner
·2359 words·12 mins·
AI Theory
Causality
🏢 LMU Munich
New orthogonal learner quantifies treatment effect’s randomness, providing sharper insights beyond average effects.
QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation
·3333 words·16 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 MIT
QuanTA: Quantum-informed Tensor Adaptation efficiently fine-tunes LLMs with high-rank updates, surpassing low-rank methods like LoRA on complex tasks while adding minimal extra parameters.
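A minimal sketch of why structured factorizations can realize high-rank updates with few parameters, using a Kronecker product as a stand-in; this illustrates the general principle, not QuanTA's quantum-informed tensor scheme:

```python
import numpy as np

# rank(A ⊗ B) = rank(A) · rank(B): two small full-rank factors give a
# full-rank 1024x1024 update from ~2k parameters, where a LoRA update of
# equal parameter count would have rank r << 1024. Illustrative only.
A = np.random.randn(32, 32)            # 1,024 parameters
B = np.random.randn(32, 32)            # 1,024 parameters
delta_W = np.kron(A, B)                # 1024 x 1024 weight update
print(np.linalg.matrix_rank(delta_W))  # 1024 with probability 1
```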
Quality-Improved and Property-Preserved Polarimetric Imaging via Complementarily Fusing
·1809 words·9 mins·
Computer Vision
Image Enhancement
🏢 Peking University
This paper introduces a novel three-phase neural network framework that significantly enhances the quality of polarimetric images by complementarily fusing degraded noisy and blurry snapshots while preserving their polarization properties.
Qualitative Mechanism Independence
·1560 words·8 mins·
AI Theory
Causality
🏢 Cornell University
Researchers introduce QIM-compatibility, a novel framework for modeling qualitative relationships in probability distributions using directed hypergraphs, significantly expanding beyond standard conditional independence.
Quadratic Quantum Variational Monte Carlo
·1669 words·8 mins·
AI Theory
Optimization
🏢 University of Texas at Austin
Q2VMC, a novel quantum chemistry algorithm, drastically boosts the efficiency and accuracy of solving the Schrödinger equation using a quadratic update mechanism and neural network ansatzes.
QuadMamba: Learning Quadtree-based Selective Scan for Visual State Space Model
·2714 words·13 mins·
AI Generated
Computer Vision
Image Classification
🏢 Shanghai Jiao Tong University
QuadMamba: A novel vision model leveraging quadtree-based scanning for superior performance in visual tasks, achieving state-of-the-art results with linear-time complexity.
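A toy sketch of the quadtree idea named in the summary: flattening the token grid quadrant-by-quadrant keeps spatial neighbors close in the 1-D scan. The fixed Z-order below is an illustration; QuadMamba learns data-dependent, mixed-granularity partitions:

```python
def quadtree_scan(i0, j0, size):
    """Flatten a size x size grid (size a power of two) in quadtree order."""
    if size == 1:
        return [(i0, j0)]
    h = size // 2
    order = []
    for di, dj in [(0, 0), (0, h), (h, 0), (h, h)]:  # NW, NE, SW, SE
        order += quadtree_scan(i0 + di, j0 + dj, h)
    return order

print(quadtree_scan(0, 0, 4))  # tokens within a quadrant stay contiguous
```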
QT-ViT: Improving Linear Attention in ViT with Quadratic Taylor Expansion
·1611 words·8 mins·
AI Generated
Computer Vision
Image Classification
🏢 Advanced Micro Devices, Inc.
QT-ViT boosts Vision Transformer efficiency by using quadratic Taylor expansion to approximate self-attention, achieving state-of-the-art accuracy and speed.
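A minimal numpy sketch of the approximation named in the summary, substituting the second-order Taylor polynomial of exp for the softmax kernel; shapes and normalization are assumptions, and the paper's linear-time evaluation of the quadratic term via feature maps is not reproduced here:

```python
import numpy as np

def taylor_attention(Q, K, V):
    """Attention with exp(s) ≈ 1 + s + s²/2 in place of the softmax kernel."""
    S = Q @ K.T                           # (n, n) similarity scores
    W = 1.0 + S + 0.5 * S**2              # quadratic Taylor expansion of exp;
                                          # always positive, so safe to normalize
    W = W / W.sum(axis=1, keepdims=True)  # row-normalize like softmax
    return W @ V
```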
QGFN: Controllable Greediness with Action Values
·3928 words·19 mins·
Machine Learning
Reinforcement Learning
🏢 Hong Kong University of Science and Technology
QGFN boosts Generative Flow Networks (GFNs) by combining their sampling policy with an action-value estimate, enabling controllable and efficient generation of high-reward samples.
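One simple way to realize that combination, sketched below as a p-greedy rule: with probability p act greedily on the action values, otherwise sample from the GFN policy. Names are illustrative and this is one of several possible mixing schemes:

```python
import numpy as np

def p_greedy_action(pi_probs, q_values, p, rng):
    """Mix a GFN sampling policy with greedy action-value selection;
    p tunes greediness between pure sampling (0) and pure argmax (1)."""
    if rng.random() < p:
        return int(np.argmax(q_values))
    return int(rng.choice(len(pi_probs), p=pi_probs))

rng = np.random.default_rng(0)
p_greedy_action(np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.9, 0.3]), 0.5, rng)
```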
QBB: Quantization with Binary Bases for LLMs
·1816 words·9 mins·
Natural Language Processing
Large Language Models
🏢 Samsung AI Cambridge
QBB: A novel post-training quantization method for LLMs dramatically improves efficiency by replacing multiplications with summations, achieving state-of-the-art results with minimal accuracy loss.
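A greedy residual-fitting sketch of the sum-of-binary-bases idea in the summary; an assumed construction for illustration, not QBB's actual optimization procedure:

```python
import numpy as np

def binary_bases(W, n_bases=4):
    """Approximate W ≈ Σᵢ aᵢ·Bᵢ with Bᵢ ∈ {-1, +1}, so products with W
    reduce to sign flips and additions instead of full multiplications."""
    bases, scales, R = [], [], W.copy()
    for _ in range(n_bases):
        B = np.where(R >= 0, 1.0, -1.0)  # sign pattern of the residual
        a = np.abs(R).mean()             # least-squares scale for that pattern
        bases.append(B); scales.append(a)
        R = R - a * B                    # next basis fits what is left over
    return bases, scales
```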
Q-VLM: Post-training Quantization for Large Vision-Language Models
·2070 words·10 mins·
Multimodal Learning
Vision-Language Models
🏢 Tsinghua University
Q-VLM: A novel post-training quantization framework significantly compresses large vision-language models, boosting inference speed without sacrificing accuracy.
Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model
·4297 words·21 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Hong Kong University of Science and Technology
Offline RL struggles with OOD action overestimation. QDQ tackles this by penalizing uncertain Q-values using a consistency model, enhancing offline RL performance.
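The general recipe the summary names, in miniature: estimate a distribution over Q-values and penalize by its spread, so uncertain (likely OOD) actions look less attractive. The λ weight and sample-based estimator are illustrative assumptions:

```python
import numpy as np

def penalized_q(q_samples, lam=1.0):
    """q_samples: draws from a learned Q-distribution for one (s, a).
    Subtracting lam·std penalizes actions whose values are uncertain."""
    q = np.asarray(q_samples)
    return q.mean() - lam * q.std()
```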
Putting Gale & Shapley to Work: Guaranteeing Stability Through Learning
·1809 words·9 mins·
AI Theory
Optimization
🏢 Penn State University
Researchers improve two-sided matching market algorithms by prioritizing stability through novel bandit-learning algorithms, providing theoretical bounds on sample complexity and demonstrating intriguing empirical results.
PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics
·3344 words·16 mins·
Machine Learning
Deep Learning
🏢 UC Los Angeles
PUREGEN uses generative model dynamics to purify poisoned training data, providing a universal, effective, and efficient train-time defense against various data poisoning attacks.
PURE: Prompt Evolution with Graph ODE for Out-of-distribution Fluid Dynamics Modeling
·2009 words·10 mins·
Machine Learning
Deep Learning
🏢 Tencent
PURE: A novel method uses Graph ODE to adapt spatio-temporal forecasting models to various fluid dynamics scenarios, improving model adaptation to unseen parameters and long-term predictions.
Pure Message Passing Can Estimate Common Neighbor for Link Prediction
·2519 words·12 mins·
Machine Learning
Representation Learning
🏢 Computer Science and Engineering, University of Notre Dame
Pure message passing in graph neural networks can accurately estimate common neighbor heuristics for superior link prediction.
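A toy demonstration of the claim: with one-hot node features and sum aggregation, a single message-passing step reproduces adjacency rows, and the inner product of two nodes' messages counts their common neighbors. A minimal recovery of the heuristic, not the paper's full model:

```python
import numpy as np

def common_neighbors(A, u, v):
    """|N(u) ∩ N(v)| from one step of sum-aggregation message passing."""
    H = np.eye(A.shape[0])   # one-hot node features
    M = A @ H                # one message-passing step: M[i] = row i of A
    return int(M[u] @ M[v])  # shared 1s count common neighbors

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
print(common_neighbors(A, 0, 3))  # node 1 is the only common neighbor -> 1
```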