Posters
2024
Predictive Attractor Models
·4524 words·22 mins·
AI Generated
Machine Learning
Deep Learning
🏢 University of South Florida
Predictive Attractor Models (PAM) offer a biologically-plausible, streaming sequence memory architecture that avoids catastrophic forgetting and generates multiple future possibilities.
Prediction-Powered Ranking of Large Language Models
·4368 words·21 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 Max Planck Institute for Software Systems
This paper presents a novel statistical framework for ranking LLMs using pairwise comparisons, accounting for the uncertainty introduced when using an LLM instead of human preferences. The framework …
Prediction with Action: Visual Policy Learning via Joint Denoising Process
·2466 words·12 mins·
AI Applications
Robotics
🏢 Tsinghua University
PAD, a novel visual policy learning framework, unifies image prediction and robot action in a joint denoising process, achieving significant performance improvements in robotic manipulation tasks.
Predicting the Performance of Foundation Models via Agreement-on-the-Line
·4845 words·23 mins·
Natural Language Processing
Large Language Models
🏢 Carnegie Mellon University
Foundation model OOD performance prediction is reliably achieved via ensemble diversity, especially through random linear head initialization, enabling precise estimates without extensive OOD labels…
Predicting Label Distribution from Ternary Labels
·2190 words·11 mins·
AI Generated
Machine Learning
Label Distribution Learning
🏢 Nanjing University of Science and Technology
Boosting label distribution learning accuracy and efficiency, this research proposes using ternary labels instead of binary labels to predict label distributions, thus enhancing annotation efficiency …
Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms
·1574 words·8 mins·
Machine Learning
Deep Learning
🏢 University of Cambridge
Deep learning algorithms now predict quantum ground state properties with constant sample complexity, regardless of system size, improving upon previous methods.
Predicting Future Actions of Reinforcement Learning Agents
·1902 words·9 mins·
AI Applications
Robotics
🏢 University of Cambridge
Predicting RL agent behavior is key for safety and interaction; this study reveals that explicitly planned agents are significantly easier to predict due to their internal plans.
Precise asymptotics of reweighted least-squares algorithms for linear diagonal networks
·1447 words·7 mins·
Machine Learning
Optimization
🏢 Georgia Institute of Technology
New analysis reveals how reweighted least-squares algorithms for linear diagonal networks achieve favorable performance in high-dimensional settings, improving upon existing theoretical guarantees and…
Precipitation Downscaling with Spatiotemporal Video Diffusion
·3419 words·17 mins·
AI Generated
AI Applications
Healthcare
🏢 UC Irvine
SpatioTemporal Video Diffusion (STVD) revolutionizes precipitation downscaling using a novel diffusion model, achieving state-of-the-art accuracy in super-resolution while capturing crucial distributi…
Pre-training Differentially Private Models with Limited Public Data
·3129 words·15 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Amazon
Researchers achieved high-accuracy differentially private (DP) models by using a novel DP continual pre-training strategy with only 10% public data, mitigating the performance degradation common in DP…
Pre-Trained Multi-Goal Transformers with Prompt Optimization for Efficient Online Adaptation
·2369 words·12 mins·
Machine Learning
Reinforcement Learning
🏢 Peking University
MGPO enables efficient online RL adaptation by optimizing prompts for pre-trained multi-goal transformers.
Pre-trained Large Language Models Use Fourier Features to Compute Addition
·7726 words·37 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 UC Los Angeles
Pre-trained LLMs surprisingly use Fourier features to perform addition, with MLP layers approximating magnitude and attention layers handling modular arithmetic; this mechanism requires pre-training.
Practical Shuffle Coding
·2298 words·11 mins·
Machine Learning
Deep Learning
🏢 University College London
Revolutionizing unordered data compression, this paper introduces autoregressive shuffle coding, achieving state-of-the-art speeds and compression rates on massive datasets.
Practical Bayesian Algorithm Execution via Posterior Sampling
·2028 words·10 mins·
AI Generated
Machine Learning
Active Learning
🏢 California Institute of Technology
PS-BAX, a novel Bayesian algorithm execution method using posterior sampling, efficiently selects evaluation points for complex tasks, outperforming existing methods in speed and scalability.
Practical $0.385$-Approximation for Submodular Maximization Subject to a Cardinality Constraint
·2921 words·14 mins·
AI Generated
AI Applications
Revenue Maximization
🏢 DataHeroes Israel
A novel algorithm achieves a 0.385-approximation for submodular maximization under cardinality constraints, combining strong theoretical guarantees with practical query complexity.
PPLNs: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond
·1984 words·10 mins·
Computer Vision
3D Vision
🏢 University of Texas at Austin
Parametric Piecewise Linear Networks (PPLNs) achieve state-of-the-art results in event-based and frame-based computer vision tasks by mimicking biological neural principles.
PowerPM: Foundation Model for Power Systems
·2167 words·11 mins·
AI Applications
Smart Cities
🏢 Zhejiang University
PowerPM: A foundation model revolutionizing power system analysis by mastering complex electricity time series (ETS) data through a novel self-supervised pre-training approach, achieving state-of-the-art performance.
Posture-Informed Muscular Force Learning for Robust Hand Pressure Estimation
·3470 words·17 mins·
AI Applications
Human-AI Interaction
🏢 Graduate School of Culture Technology, KAIST
PiMForce achieves robust hand pressure estimation by combining 3D hand posture information with sEMG signals.
Post-Hoc Reversal: Are We Selecting Models Prematurely?
·2661 words·13 mins·
Machine Learning
Deep Learning
🏢 Stanford University
Post-hoc model transformations can reverse performance trends, prompting a reevaluation of model selection strategies and suggesting a new ‘post-hoc selection’ method for improved model development.
Position Coupling: Improving Length Generalization of Arithmetic Transformers Using Task Structure
·15685 words·74 mins·
AI Generated
AI Theory
Generalization
🏢 Google Research
Position coupling, a novel method, enhances the length generalization ability of arithmetic Transformers by directly embedding task structures into positional encodings. This simple technique enables…