Posters
2024
Self-Taught Recognizer: Toward Unsupervised Adaptation for Speech Foundation Models
·2366 words·12 mins·
Natural Language Processing
Speech Recognition
🏢 NVIDIA Research
STAR, a novel unsupervised adaptation framework, drastically improves automatic speech recognition (ASR) robustness across diverse domains using only unlabeled data and outperforms existing self-train…
Self-supervised Transformation Learning for Equivariant Representations
·2895 words·14 mins·
AI Generated
Machine Learning
Self-Supervised Learning
🏢 Korea Advanced Institute of Science and Technology (KAIST)
Self-Supervised Transformation Learning (STL) enhances equivariant representations by replacing transformation labels with image-pair-derived representations, improving performance on diverse classifi…
Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels
·5609 words·27 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 Stanford University
SAMI (Self-Supervised Alignment with Mutual Information) teaches language models to follow principles without human preference labels by maximizing the mutual information between principle…
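A minimal sketch of the InfoNCE-style objective such a mutual-information approach suggests, assuming a [B, B] matrix of sequence log-probabilities where entry (i, j) scores response j under principle i; this is an illustration of the general idea, not the paper's implementation:

```python
# Hypothetical sketch of an InfoNCE-style lower bound on the mutual
# information between principles and responses, in the spirit of SAMI.
# `logp` is assumed to be a [B, B] matrix where logp[i, j] is the model's
# log-probability of response j conditioned on principle i.
import torch
import torch.nn.functional as F

def sami_style_loss(logp: torch.Tensor) -> torch.Tensor:
    """Symmetric InfoNCE loss: matched (principle i, response i) pairs on
    the diagonal should dominate both their row and their column."""
    targets = torch.arange(logp.size(0))
    row_loss = F.cross_entropy(logp, targets)      # pick response given principle
    col_loss = F.cross_entropy(logp.t(), targets)  # pick principle given response
    return 0.5 * (row_loss + col_loss)

# Toy usage with random scores standing in for sequence log-probabilities.
logp = torch.randn(4, 4)
print(sami_style_loss(logp))
```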
Self-Supervised Adversarial Training via Diverse Augmented Queries and Self-Supervised Double Perturbation
·2025 words·10 mins·
Machine Learning
Self-Supervised Learning
🏢 Institute of Computing Technology, Chinese Academy of Sciences
DAQ-SDP enhances self-supervised adversarial training by using diverse augmented queries, a self-supervised double perturbation scheme, and a novel Aug-Adv Pairwise-BatchNorm method, bridging the gap …
Self-Retrieval: End-to-End Information Retrieval with One Large Language Model
·2148 words·11 mins·
AI Generated
Natural Language Processing
Information Retrieval
🏢 Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences
Self-Retrieval revolutionizes information retrieval by unifying indexing, retrieval, and reranking within a single large language model, achieving significantly improved performance.
Self-Refining Diffusion Samplers: Enabling Parallelization via Parareal Iterations
·2449 words·12 mins·
Machine Learning
Deep Learning
🏢 Stanford University
Self-Refining Diffusion Samplers (SRDS) dramatically speeds up diffusion model sampling by leveraging Parareal iterations for parallel-in-time computation, maintaining high-quality outputs.
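For intuition, here is a toy Parareal iteration on a scalar ODE. The fine solves across time slices are mutually independent and can run in parallel, which is the parallel-in-time property SRDS exploits; this is a generic Parareal sketch, not the paper's sampler:

```python
# Toy Parareal iteration on dy/dt = -y; the fine solves per slice are
# independent and thus parallelizable.
import numpy as np

def coarse(y, t0, t1):          # one cheap Euler step
    return y + (t1 - t0) * (-y)

def fine(y, t0, t1, m=20):      # m Euler substeps; run per slice in parallel
    h = (t1 - t0) / m
    for _ in range(m):
        y = y + h * (-y)
    return y

T = np.linspace(0.0, 2.0, 9)    # time-slice boundaries
U = [1.0]                       # serial coarse sweep gives the initial guess
for n in range(len(T) - 1):
    U.append(coarse(U[-1], T[n], T[n + 1]))

for k in range(4):              # Parareal correction iterations
    F_vals = [fine(U[n], T[n], T[n + 1]) for n in range(len(T) - 1)]
    U_new = [U[0]]
    for n in range(len(T) - 1):
        U_new.append(coarse(U_new[-1], T[n], T[n + 1])
                     + F_vals[n] - coarse(U[n], T[n], T[n + 1]))
    U = U_new

print(U[-1], np.exp(-2.0))      # iterates converge to the serial fine solution
```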
Self-playing Adversarial Language Game Enhances LLM Reasoning
·2197 words·11 mins·
Natural Language Processing
Large Language Models
🏢 Tencent AI Lab
Self-play on an adversarial language game consistently boosts LLM reasoning performance.
Self-Play Fine-tuning of Diffusion Models for Text-to-image Generation
·4025 words·19 mins·
AI Generated
Computer Vision
Image Generation
🏢 University of California, Los Angeles
Self-Play Fine-Tuning (SPIN-Diffusion) revolutionizes diffusion model training, achieving superior text-to-image results with less data via iterative self-improvement, surpassing supervised and RLHF m…
Self-Labeling the Job Shop Scheduling Problem
·2214 words·11 mins·
AI Generated
Machine Learning
Self-Supervised Learning
🏢 University of Modena and Reggio Emilia
Self-labeling improves generative model training for combinatorial problems such as job shop scheduling.
Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments
·2758 words·13 mins·
Machine Learning
Self-Supervised Learning
🏢 University of Cambridge
Self-healing machine learning (SHML) autonomously diagnoses and fixes model performance degradation caused by data shifts, outperforming reason-agnostic methods.
Self-Guiding Exploration for Combinatorial Problems
·2441 words·12 mins·
Natural Language Processing
Large Language Models
🏢 MBZUAI
LLMs excel at reasoning tasks, but their application to combinatorial problems (CPs) is underexplored. This paper introduces Self-Guiding Exploration (SGE), a novel prompting strategy that significan…
Self-Guided Masked Autoencoder
·3698 words·18 mins·
AI Generated
Computer Vision
Self-Supervised Learning
🏢 Seoul National University
Self-guided MAE boosts self-supervised learning by intelligently masking image patches based on internal clustering patterns, dramatically accelerating training without external data.
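A hedged sketch of the general idea: cluster the patch embeddings, then bias the mask toward one cluster instead of masking uniformly at random. The shapes and masking rule below are assumptions for illustration, not the paper's exact procedure:

```python
# Hypothetical clustering-driven patch masking: group patch embeddings and
# preferentially mask patches from the dominant cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
patch_emb = rng.normal(size=(196, 64))        # 14x14 patches, 64-d embeddings

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(patch_emb)
dominant = np.bincount(labels).argmax()

mask_ratio = 0.75
candidates = np.flatnonzero(labels == dominant)  # bias mask toward one cluster
n_mask = int(mask_ratio * len(patch_emb))
masked = rng.choice(candidates, size=min(n_mask, len(candidates)), replace=False)
print(f"masking {len(masked)} of {len(patch_emb)} patches")
```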
Self-Distilled Depth Refinement with Noisy Poisson Fusion
·2691 words·13 mins·
Computer Vision
3D Vision
🏢 Huazhong University of Science and Technology
Self-Distilled Depth Refinement (SDDR) tackles noisy depth maps via a novel noisy Poisson fusion approach, achieving significant improvements in depth accuracy and edge quality.
SELF-DISCOVER: Large Language Models Self-Compose Reasoning Structures
·2441 words·12 mins·
Natural Language Processing
Large Language Models
🏢 Google DeepMind
LLMs self-discover optimal reasoning structures for complex problems, boosting performance by up to 32% compared to existing methods.
Self-Calibrating Conformal Prediction
·2092 words·10 mins·
AI Applications
Healthcare
🏢 University of Washington
Self-Calibrating Conformal Prediction (SC-CP) marries model calibration and conformal prediction for more efficient and interpretable prediction intervals with prediction-conditional validity.
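As background, a minimal split conformal regression sketch, the classical building block that SC-CP extends with self-calibration; the model and data here are toy stand-ins:

```python
# Split conformal regression: residuals on a held-out calibration set give
# a quantile that widens point predictions into valid intervals.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=500)
y = np.sin(x) + rng.normal(scale=0.2, size=500)

pred = np.sin                    # stand-in for a fitted model
x_cal, y_cal, x_test = x[:250], y[:250], x[250:]

alpha = 0.1                      # target 90% coverage
scores = np.abs(y_cal - pred(x_cal))                  # calibration residuals
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

lower, upper = pred(x_test) - q, pred(x_test) + q     # prediction intervals
print(q, lower[:3], upper[:3])
```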
Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection
·2209 words·11 mins·
Multimodal Learning
Vision-Language Models
🏢 Shanghai Jiao Tong University
Self-Calibrated Tuning (SCT) enhances vision-language model OOD detection by adaptively weighting OOD regularization based on prediction uncertainty, mitigating issues caused by inaccurate feature ext…
Selective Attention: Enhancing Transformer through Principled Context Control
·2002 words·10 mins·
Natural Language Processing
Large Language Models
🏢 University of Michigan
Enhance Transformer models via Selective Self-Attention (SSA), a principled context control method that boosts accuracy and efficiency.
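One way to picture the idea is attention with per-query softmax temperatures, so each query can sharpen or flatten how much context it attends to; the temperature parameterization below is an illustrative assumption, not the paper's exact formulation:

```python
# Temperature-modulated attention: each query gets its own softmax
# temperature, controlling how concentrated its context weighting is.
import torch

def selective_attention(q, k, v, tau):
    """q, k, v: [T, d]; tau: [T, 1] positive per-query temperatures."""
    d = q.size(-1)
    logits = (q @ k.t()) / (d ** 0.5)
    weights = torch.softmax(logits / tau, dim=-1)  # tau<1 sharpens, tau>1 flattens
    return weights @ v

T, d = 5, 16
q, k, v = (torch.randn(T, d) for _ in range(3))
tau = torch.nn.functional.softplus(torch.randn(T, 1)) + 0.5  # keep tau > 0
print(selective_attention(q, k, v, tau).shape)
```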
SelectIT: Selective Instruction Tuning for LLMs via Uncertainty-Aware Self-Reflection
·3120 words·15 mins·
Natural Language Processing
Large Language Models
🏢 Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
SelectIT leverages LLMs’ intrinsic uncertainty to efficiently select high-quality instruction tuning data, enhancing model performance without extra resources.
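A simplified sketch of uncertainty-aware selection in this spirit: rate each candidate sample several times under perturbed prompts, then keep samples rated high with low disagreement. The scoring rule is an assumption for illustration, not SelectIT's exact formula:

```python
# Illustrative uncertainty-aware instruction-data selection.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_prompts = 1000, 5
# Hypothetical 1-5 quality ratings from the LLM under 5 rephrased prompts.
ratings = rng.integers(1, 6, size=(n_samples, n_prompts)).astype(float)

mean_r = ratings.mean(axis=1)          # average self-assessed quality
uncertainty = ratings.std(axis=1)      # disagreement across prompt variants
score = mean_r - uncertainty           # high rating, low disagreement wins

keep = np.argsort(score)[::-1][: int(0.2 * n_samples)]  # keep top 20%
print(len(keep), score[keep[:5]])
```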
SEL-BALD: Deep Bayesian Active Learning for Selective Labeling with Instance Rejection
·2048 words·10 mins·
Machine Learning
Active Learning
🏢 University of Texas at Dallas
SEL-BALD tackles the challenge of human discretion in active learning by proposing novel algorithms that account for instance rejection, significantly boosting sample efficiency.
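For reference, the classic BALD acquisition score that SEL-BALD builds on, estimated from Monte-Carlo forward passes; the instance-rejection component that is the paper's contribution is omitted here:

```python
# BALD acquisition: mutual information between a point's label and the
# model posterior, approximated with T stochastic forward passes.
import numpy as np

def bald_scores(probs):
    """probs: [T, N, C] class probabilities from T stochastic passes."""
    mean_p = probs.mean(axis=0)                                   # [N, C]
    h_mean = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)      # predictive entropy
    h_each = -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean(axis=0)
    return h_mean - h_each                                        # expected info gain

rng = np.random.default_rng(3)
logits = rng.normal(size=(10, 100, 4))                            # 10 MC passes
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
query = np.argsort(bald_scores(probs))[::-1][:5]                  # most informative
print(query)
```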
Segmenting Watermarked Texts From Language Models
·2577 words·13 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 Texas A&M University
This paper presents novel statistical methods to reliably watermark and segment LLM-generated text, ensuring source traceability even after user modifications.