
Posters

2024

EnOF-SNN: Training Accurate Spiking Neural Networks via Enhancing the Output Feature
·1417 words·7 mins
Machine Learning Deep Learning 🏢 Peking University
EnOF-SNN boosts spiking neural network (SNN) accuracy by enhancing output feature representation using a novel knowledge distillation method and ReLU activation, outperforming current state-of-the-art methods.
Enhancing Semi-Supervised Learning via Representative and Diverse Sample Selection
·1698 words·8 mins
Machine Learning Semi-Supervised Learning 🏢 Zhejiang University
RDSS: a novel sample selection method for semi-supervised learning, boosts model accuracy by minimizing α-MMD, striking a balance between sample representativeness and diversity.
Enhancing Robustness of Last Layer Two-Stage Fair Model Corrections
·2233 words·11 mins
AI Theory Fairness 🏢 Arizona State University
Boosting fair machine learning’s robustness against noisy labels, this work introduces a novel label-spreading method, achieving state-of-the-art worst-group accuracy.
Enhancing Robustness in Deep Reinforcement Learning: A Lyapunov Exponent Approach
·2344 words·12 mins
Machine Learning Reinforcement Learning 🏢 University of Glasgow
Deep RL agents lack robustness; this paper enhances their resilience by implementing Maximal Lyapunov Exponent regularisation in the Dreamer V3 architecture, thus improving real-world applicability.
Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus
·3384 words·16 mins
AI Generated Natural Language Processing Large Language Models 🏢 Advanced AI Innovation Center, Hitachi
Boosting AI reasoning! New research enhances LLMs’ logical abilities via a principled synthetic logic corpus, achieving substantial improvements across logic, math, and coding benchmarks.
Enhancing Protein Mutation Effect Prediction through a Retrieval-Augmented Framework
·1980 words·10 mins
Machine Learning Deep Learning 🏢 Tsinghua University
Revolutionizing protein mutation effect prediction, this work introduces a retrieval-augmented framework achieving state-of-the-art accuracy by efficiently incorporating similar local structure information.
Enhancing Multiple Dimensions of Trustworthiness in LLMs via Sparse Activation Control
·3239 words·16 mins
Natural Language Processing Large Language Models 🏢 Zhejiang University
Boosting LLM trustworthiness, researchers introduce Sparse Activation Control, a training-free method that concurrently enhances safety, factuality, and bias mitigation by selectively controlling attention heads.
Enhancing Motion in Text-to-Video Generation with Decomposed Encoding and Conditioning
·2723 words·13 mins
Multimodal Learning Vision-Language Models 🏢 Hong Kong Polytechnic University
DEMO framework enhances text-to-video generation by decomposing text encoding and conditioning into content and motion components, resulting in videos with significantly improved motion dynamics.
Enhancing LLM’s Cognition via Structurization
·3694 words·18 mins
AI Generated Natural Language Processing Large Language Models 🏢 Zhejiang University
LLMs struggle with complex, long-form text. This paper introduces ‘context structurization,’ transforming unstructured text into a structured format to enhance LLM comprehension. Experiments across …
Enhancing Large Vision Language Models with Self-Training on Image Comprehension
·3514 words·17 mins
AI Generated Natural Language Processing Vision-Language Models 🏢 UC Los Angeles
Self-Training on Image Comprehension (STIC) significantly boosts Large Vision Language Model (LVLM) performance using unlabeled image data. STIC generates a preference dataset for image descriptions.
Enhancing Large Language Models through Adaptive Tokenizers
·1963 words·10 mins
Natural Language Processing Large Language Models 🏢 Huawei Noah's Ark Lab
Adaptive tokenizers enhance LLMs by dynamically optimizing vocabulary during training, improving accuracy without increasing vocabulary size.
Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective
·2209 words·11 mins
Natural Language Processing Large Language Models 🏢 Renmin University of China
SVD-based weight pruning surprisingly boosts in-context learning in large language models, especially when applied to deeper layers, offering a novel approach to model compression and efficiency.
Enhancing Graph Transformers with Hierarchical Distance Structural Encoding
·3923 words·19 mins
AI Generated Machine Learning Representation Learning 🏢 Beihang University
Hierarchical Distance Structural Encoding (HDSE) empowers graph transformers to better capture hierarchical graph structures, leading to improved performance in graph classification and regression tasks.
Enhancing Feature Diversity Boosts Channel-Adaptive Vision Transformers
·3757 words·18 mins
AI Generated Computer Vision Image Classification 🏢 Boston University
DiChaViT boosts channel-adaptive vision transformers by enhancing feature diversity, yielding a 1.5-5% accuracy gain over state-of-the-art MCI models on diverse datasets.
Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation
·1946 words·10 mins
Machine Learning Reinforcement Learning 🏢 UC Berkeley
ESPO enhances safe RL efficiency by dynamically manipulating sample size based on reward-safety gradient conflicts, ensuring faster training and superior performance.
Enhancing Domain Adaptation through Prompt Gradient Alignment
·2283 words·11 mins
Machine Learning Transfer Learning 🏢 New York University
Prompt Gradient Alignment (PGA) enhances unsupervised domain adaptation by aligning per-objective gradients in a multi-objective optimization framework, achieving state-of-the-art results.
Enhancing Diversity in Bayesian Deep Learning via Hyperspherical Energy Minimization of CKA
·3150 words·15 mins
Machine Learning Deep Learning 🏢 Oregon State University
Boosting Bayesian deep learning’s diversity and uncertainty quantification, this study proposes hyperspherical energy minimization of CKA to generate diverse and reliable neural network ensembles.
Enhancing Consistency-Based Image Generation via Adversarialy-Trained Classification and Energy-Based Discrimination
·2128 words·10 mins
Computer Vision Image Generation 🏢 Technion
This paper introduces a novel post-processing technique that significantly boosts the perceptual quality of images generated by consistency models, using a joint classifier-discriminator trained adversarially.
Enhancing Chess Reinforcement Learning with Graph Representation
·2930 words·14 mins
AI Generated Machine Learning Reinforcement Learning 🏢 Kyoto University
AlphaGateau: a novel Graph Neural Network architecture outperforms previous chess AI models by leveraging graph representations for faster training and superior generalization to different board sizes.
Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces
·2907 words·14 mins
Machine Learning Deep Learning 🏢 Imperial College London
Train discrete EBMs efficiently with Energy Discrepancy, a novel loss function that eliminates the need for Markov Chain Monte Carlo, using diffusion processes on structured spaces.