Posters

2024

ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution
·3978 words·19 mins
AI Theory Optimization 🏢 Peking University
ReEvo, a novel integration of evolutionary search and LLM reflections, generates state-of-the-art heuristics for combinatorial optimization problems, demonstrating superior sample efficiency.
REDUCR: Robust Data Downsampling using Class Priority Reweighting
·2544 words·12 mins
Machine Learning Deep Learning 🏢 University College London
REDUCR, a novel data downsampling method, significantly improves worst-class test accuracy in imbalanced datasets by using class priority reweighting, surpassing state-of-the-art methods by ~15%.
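The class-priority idea can be illustrated with a toy sketch (my assumption of the mechanism, not the REDUCR implementation: class names, losses, and the weighting rule below are all made up for illustration):

```python
# Toy sketch of class-priority reweighting for data downsampling:
# examples from the currently worst-performing class get higher priority,
# so downsampling keeps them preferentially.
per_class_acc = {"cat": 0.9, "dog": 0.5, "bird": 0.7}  # hypothetical held-out accuracies

# Priority weight: the lower a class's accuracy, the higher its weight.
weights = {cls: 1.0 - acc for cls, acc in per_class_acc.items()}

# Candidate batch: (example_id, class_label, per-example loss).
batch = [("ex1", "cat", 0.2), ("ex2", "dog", 0.2), ("ex3", "bird", 0.2)]

# Score each example by class weight x loss, then keep the top-k.
scored = sorted(batch, key=lambda ex: weights[ex[1]] * ex[2], reverse=True)
kept = scored[:2]  # the "dog" and "bird" examples survive; "cat" is dropped
```

With equal per-example losses, the worst class ("dog") is retained first, which is the behavior that drives the worst-class accuracy gains described above.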
Reducing Transformer Key-Value Cache Size with Cross-Layer Attention
·2727 words·13 mins
Natural Language Processing Large Language Models 🏢 MIT CSAIL
Cross-Layer Attention (CLA) shrinks Transformer Key-Value cache 2x, improving LLMs’ memory efficiency without accuracy loss.
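The memory saving can be sketched in a few lines (a toy sketch of KV sharing under my assumptions, not the authors' code; `share_factor` and the slot mapping are illustrative):

```python
# Toy sketch of cross-layer KV sharing: groups of adjacent layers reuse one
# key/value cache slot, so only n_layers // share_factor slots are stored
# instead of one per layer.
n_layers = 4
share_factor = 2  # consecutive layers sharing one slot -> 2x smaller cache

# One cache slot per group of layers.
kv_cache = [{"K": [], "V": []} for _ in range(n_layers // share_factor)]

def kv_for_layer(layer_idx: int) -> dict:
    """Map a layer index to its shared KV cache slot."""
    return kv_cache[layer_idx // share_factor]
```

Here layers 0 and 1 read and write the same slot, so the cache holds half as many key/value tensors; attention itself is unchanged.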
Recursive Introspection: Teaching Language Model Agents How to Self-Improve
·2681 words·13 mins
Natural Language Processing Large Language Models 🏢 Carnegie Mellon University
RISE: Recursive Introspection teaches LLMs to iteratively improve their responses, enabling self-correction and enhanced performance on challenging reasoning tasks.
Recurrent Reinforcement Learning with Memoroids
·2207 words·11 mins
Machine Learning Reinforcement Learning 🏢 University of Macau
Memoroids and Tape-Based Batching revolutionize recurrent RL, enabling efficient processing of long sequences and improving sample efficiency by eliminating segmentation.
Recurrent neural networks: vanishing and exploding gradients are not the end of the story
·2602 words·13 mins
AI Theory Optimization 🏢 ETH Zurich
Recurrent neural networks struggle with long-term memory due to a newly identified ‘curse of memory’: parameter sensitivity increases as memory lengthens. This work provides new insights into RNN optimization.
Recurrent Complex-Weighted Autoencoders for Unsupervised Object Discovery
·2697 words·13 mins
Computer Vision Image Segmentation 🏢 Google DeepMind
SynCx, a novel recurrent autoencoder with complex weights, surpasses state-of-the-art models in unsupervised object discovery by iteratively refining phase relationships to achieve robust object binding.
RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance
·2658 words·13 mins
Computer Vision Image Generation 🏢 Peking University
RectifID personalizes image generation by cleverly guiding a diffusion model using off-the-shelf classifiers, achieving identity preservation without needing extra training data.
Recovering Complete Actions for Cross-dataset Skeleton Action Recognition
·2959 words·14 mins
Computer Vision Action Recognition 🏢 Tsinghua University
Recovering complete actions and resampling them boosts skeleton action recognition accuracy across datasets, outperforming existing methods.
Reconstruction of Manipulated Garment with Guided Deformation Prior
·2931 words·14 mins
Computer Vision 3D Vision 🏢 Computer Vision Lab, EPFL
Researchers developed a novel method for reconstructing the 3D shape of manipulated garments, achieving superior accuracy compared to existing techniques, particularly for complex, non-rigid deformations.
Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable
·2340 words·11 mins
AI Theory Privacy 🏢 Amazon
Deleting an individual’s data from a machine learning model can expose that individual to highly accurate reconstruction attacks, even when the model is simple; this research demonstrates the vulnerability.
Reconstructing the Image Stitching Pipeline: Integrating Fusion and Rectangling into a Unified Inpainting Model
·2463 words·12 mins
Computer Vision Image Generation 🏢 College of Computer Science and Technology, Tongji University
SRStitcher revolutionizes image stitching by integrating fusion and rectangling into a unified inpainting model, eliminating model training and achieving superior performance and stability.
Recognize Any Regions
·2350 words·12 mins
AI Generated Multimodal Learning Vision-Language Models 🏢 University of Surrey
RegionSpot efficiently integrates pretrained localization and vision-language models for superior open-world object recognition, achieving significant performance gains with minimal training.
Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents
·1896 words·9 mins
Machine Learning Reinforcement Learning 🏢 UC Los Angeles
Reciprocators: AI agents that learn to cooperate by reciprocating influence, achieving prosocial outcomes in complex scenarios.
Reciprocal Learning
·3277 words·16 mins
AI Generated Machine Learning Active Learning 🏢 LMU Munich
Numerous machine learning algorithms are unified under the novel paradigm of reciprocal learning, proven to converge at linear rates under specific conditions, enhancing sample efficiency.
REBORN: Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised ASR
·2781 words·14 mins
AI Generated Natural Language Processing Speech Recognition 🏢 National Taiwan University
REBORN: An iterative training framework significantly improves unsupervised ASR by learning optimal speech segment boundaries using reinforcement learning, outperforming existing methods.
REBEL: Reinforcement Learning via Regressing Relative Rewards
·2652 words·13 mins
Machine Learning Reinforcement Learning 🏢 Cornell University
REBEL, a novel reinforcement learning algorithm, simplifies policy optimization by regressing relative rewards, achieving strong performance in language and image generation tasks with increased efficiency.
Reawakening knowledge: Anticipatory recovery from catastrophic interference via structured training
·2387 words·12 mins
Natural Language Processing Large Language Models 🏢 New York University
Overparameterized neural networks surprisingly recover from catastrophic interference when trained cyclically on repeated data sequences, exhibiting anticipatory knowledge reactivation.
Reasons and Solutions for the Decline in Model Performance after Editing
·2167 words·11 mins
Natural Language Processing Large Language Models 🏢 Peking University
Boosting large language model performance after knowledge editing: A new method (D4S) minimizes model damage by regulating the explosive growth of parameter layers, enabling multiple effective edits.
Reasoning Multi-Agent Behavioral Topology for Interactive Autonomous Driving
·4093 words·20 mins
AI Generated AI Applications Autonomous Vehicles 🏢 Nanyang Technological University
BeTopNet uses braid theory to create a topological representation of multi-agent future driving behaviors, improving prediction and planning accuracy in autonomous driving systems.