Posters
2024
In-Trajectory Inverse Reinforcement Learning: Learn Incrementally From An Ongoing Trajectory
·1427 words·7 mins·
Machine Learning
Reinforcement Learning
🏢 Pennsylvania State University
MERIT-IRL, the first in-trajectory IRL framework, learns reward and policy incrementally from an ongoing trajectory with a sub-linear regret guarantee.
In-N-Out: Lifting 2D Diffusion Prior for 3D Object Removal via Tuning-Free Latents Alignment
·2437 words·12 mins·
Computer Vision
3D Vision
🏢 University of Melbourne
In-N-Out enhances 3D scene reconstruction after object removal by aligning 2D diffusion model latents, without any tuning, for consistent multi-view inpainti…
In-Context Symmetries: Self-Supervised Learning through Contextual World Models
·3570 words·17 mins·
Computer Vision
Self-Supervised Learning
🏢 MIT CSAIL
CONTEXTSSL: A novel self-supervised learning algorithm that adapts to task-specific symmetries by using context, achieving significant performance gains over existing methods.
In-Context Learning with Representations: Contextual Generalization of Trained Transformers
·1880 words·9 mins·
Natural Language Processing
Large Language Models
🏢 Carnegie Mellon University
Transformers can learn contextual information to generalize to unseen examples and tasks even with limited training data, with training converging linearly to a global minimum.
In-Context Learning State Vector with Inner and Momentum Optimization
·2182 words·11 mins·
Natural Language Processing
Large Language Models
🏢 Harbin Institute of Technology (Shenzhen)
This paper introduces inner and momentum optimization to enhance the state vector for in-context learning, improving performance and scalability in LLMs.
In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization
·436 words·3 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 UC Berkeley
Linear Transformer Blocks (LTBs) achieve near-optimal in-context learning (ICL) for linear regression by effectively implementing one-step gradient descent with learnable initialization, a significant…
In Pursuit of Causal Label Correlations for Multi-label Image Recognition
·2377 words·12 mins·
Computer Vision
Image Classification
🏢 Wenzhou University
This research leverages causal intervention to identify and utilize genuine label correlations in multi-label image recognition, mitigating contextual bias for improved accuracy.
Improving Viewpoint-Independent Object-Centric Representations through Active Viewpoint Selection
·2484 words·12 mins·
Computer Vision
Image Segmentation
🏢 School of Computer Science, Fudan University
Active Viewpoint Selection (AVS) significantly improves viewpoint-independent object-centric representations by actively selecting the most informative viewpoints for each scene, leading to better seg…
Improving the Training of Rectified Flows
·4681 words·22 mins·
AI Generated
Computer Vision
Image Generation
🏢 Carnegie Mellon University
Researchers significantly boosted the efficiency and quality of rectified flow, a method for generating samples from diffusion models, by introducing novel training techniques that surpass state-of-th…
Improving the Learning Capability of Small-size Image Restoration Network by Deep Fourier Shifting
·1825 words·9 mins·
Computer Vision
Image Restoration
🏢 AIRI
Deep Fourier Shifting boosts small image restoration networks by using an information-lossless Fourier cycling shift operator, improving performance across various low-level tasks while reducing compu…
Improving Temporal Link Prediction via Temporal Walk Matrix Projection
·2541 words·12 mins·
AI Generated
Machine Learning
Deep Learning
🏢 CCSE Lab, Beihang University
TPNet boosts temporal link prediction accuracy and efficiency by unifying relative encodings via temporal walk matrices and using random feature propagation.
Improving Subgroup Robustness via Data Selection
·1691 words·8 mins·
AI Theory
Robustness
🏢 MIT
Data Debiasing with Datamodels (D3M) efficiently improves machine learning model robustness by identifying and removing specific training examples that disproportionately harm minority groups’ accurac…
Improving Sparse Decomposition of Language Model Activations with Gated Sparse Autoencoders
·4021 words·19 mins·
Natural Language Processing
Large Language Models
🏢 Google DeepMind
Gated Sparse Autoencoders (GSAEs) achieve Pareto improvement over baseline SAEs for unsupervised feature discovery in language models, resolving the shrinkage bias of L1 penalty by separating feature …
Improving self-training under distribution shifts via anchored confidence with theoretical guarantees
·2507 words·12 mins·
Machine Learning
Semi-Supervised Learning
🏢 Northwestern University
Anchored Confidence (AnCon) significantly improves self-training under distribution shifts by using a temporal ensemble to smooth noisy pseudo-labels, achieving 8-16% performance gains without computa…
Improving Robustness of 3D Point Cloud Recognition from a Fourier Perspective
·2312 words·11 mins·
Computer Vision
3D Vision
🏢 Chinese Academy of Sciences
Frequency Adversarial Training (FAT) leverages frequency-domain adversarial examples to improve the robustness of 3D point cloud recognition against corruptions, achieving state-of…
Improving Neural ODE Training with Temporal Adaptive Batch Normalization
·3052 words·15 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Hong Kong University of Science and Technology
Temporal Adaptive Batch Normalization (TA-BN) resolves traditional Batch Normalization’s limitations in Neural ODE training by providing a continuous-time counterpart, enabling deeper networks …
Improving Neural Network Surface Processing with Principal Curvatures
·2900 words·14 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Inria
Boosting neural network surface processing: Using principal curvatures as input significantly improves segmentation and classification accuracy while reducing computational overhead.
Improving Linear System Solvers for Hyperparameter Optimisation in Iterative Gaussian Processes
·3448 words·17 mins·
Machine Learning
Gaussian Processes
🏢 University of Cambridge
Accelerate Gaussian process hyperparameter optimization by up to 72x using novel linear system solver techniques.
Improving Gloss-free Sign Language Translation by Reducing Representation Density
·3386 words·16 mins·
AI Generated
Natural Language Processing
Machine Translation
🏢 Tencent AI Lab
SignCL, a novel contrastive learning strategy, significantly boosts gloss-free sign language translation by reducing representation density, achieving substantial performance gains.
Improving Generalization of Dynamic Graph Learning via Environment Prompt
·2632 words·13 mins·
AI Applications
Smart Cities
🏢 University of Science and Technology of China
EpoD, a novel dynamic graph learning model, significantly improves generalization via a self-prompted learning mechanism for environment inference and a structural causal model utilizing dynamic subgr…