🏢 Seoul National University
Textual Training for the Hassle-Free Removal of Unwanted Visual Data: Case Studies on OOD and Hateful Image Detection
·2145 words·11 mins·
Multimodal Learning
Vision-Language Models
🏢 Seoul National University
Hassle-Free Textual Training (HFTT) uses only textual data to effectively remove unwanted visual data from AI training datasets, significantly reducing human annotation needs.
Spectral-Risk Safe Reinforcement Learning with Convergence Guarantees
·2502 words·12 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Seoul National University
SRCPO: a novel spectral risk measure-constrained RL algorithm guaranteeing convergence to a global optimum, outperforming existing methods in continuous control tasks.
Self-Guided Masked Autoencoder
·3698 words·18 mins·
AI Generated
Computer Vision
Self-Supervised Learning
🏢 Seoul National University
Self-guided MAE boosts self-supervised learning by intelligently masking image patches based on internal clustering patterns, dramatically accelerating training without external data.
Sample Selection via Contrastive Fragmentation for Noisy Label Regression
·6755 words·32 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Seoul National University
ConFrag, a novel approach to noisy label regression, leverages contrastive fragmentation and neighborhood agreement to select clean samples, significantly outperforming state-of-the-art baselines on s…
Randomized Exploration for Reinforcement Learning with Multinomial Logistic Function Approximation
·543 words·3 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Seoul National University
First provably efficient randomized RL algorithms using multinomial logistic function approximation are introduced, achieving superior performance and constant-time computational cost.
Queueing Matching Bandits with Preference Feedback
·1365 words·7 mins·
AI Generated
AI Theory
Optimization
🏢 Seoul National University
Novel algorithms stabilize multi-server queueing systems with unknown service rates, achieving sublinear regret by learning server preferences via preference feedback.
Paralinguistics-Aware Speech-Empowered Large Language Models for Natural Conversation
·2883 words·14 mins·
AI Generated
Natural Language Processing
Dialogue Systems
🏢 Seoul National University
Unified Spoken Dialog Model (USDM) directly generates coherent spoken responses with natural prosody, surpassing cascaded baselines and enhancing natural conversation in speech-enabled LLMs.
Nearly Minimax Optimal Regret for Multinomial Logistic Bandit
·1353 words·7 mins·
AI Theory
Optimization
🏢 Seoul National University
This paper presents OFU-MNL+, a constant-time algorithm achieving nearly minimax optimal regret for contextual multinomial logistic bandits, closing the gap between existing upper and lower bounds.
Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models
·1888 words·9 mins·
Natural Language Processing
Large Language Models
🏢 Seoul National University
BinaryMoS: a novel token-adaptive binarization method that boosts LLM accuracy and efficiency by dynamically merging multiple scaling experts for each token.
Mitigating Spurious Correlations via Disagreement Probability
·2000 words·10 mins·
AI Generated
AI Theory
Fairness
🏢 Seoul National University
DPR, a novel bias mitigation method, robustly improves model performance by leveraging disagreement probability without needing bias labels, achieving state-of-the-art results.
Introducing Spectral Attention for Long-Range Dependency in Time Series Forecasting
·3194 words·15 mins·
Machine Learning
Deep Learning
🏢 Seoul National University
Spectral Attention boosts long-range dependency capture in time series forecasting, achieving state-of-the-art results across various models and datasets.
Improved Regret of Linear Ensemble Sampling
·1286 words·7 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Seoul National University
Linear ensemble sampling achieves a state-of-the-art regret bound of Õ(d^{3/2}√T) with a logarithmic ensemble size, closing the theory-practice gap in linear bandit algorithms.
Gradient-free Decoder Inversion in Latent Diffusion Models
·2408 words·12 mins·
Computer Vision
Image Generation
🏢 Seoul National University
This paper introduces a novel gradient-free decoder inversion method for latent diffusion models, improving efficiency and memory usage compared to existing gradient-based methods. The method is theo…
Gated Inference Network: Inference and Learning State-Space Models
·3839 words·19 mins·
Machine Learning
Representation Learning
🏢 Seoul National University
GIN, a novel approximate Bayesian inference algorithm, efficiently handles nonlinear state-space models with high-dimensional, noisy observations by disentangling observation and dynamics. Achieving l…
FIFO-Diffusion: Generating Infinite Videos from Text without Training
·3112 words·15 mins·
Computer Vision
Video Understanding
🏢 Seoul National University
FIFO-Diffusion generates infinitely long, high-quality videos from text prompts using a pretrained model, solving the challenge of long video generation without retraining.
FedAvP: Augment Local Data via Shared Policy in Federated Learning
·3211 words·16 mins·
Machine Learning
Federated Learning
🏢 Seoul National University
FedAvP enhances privacy in federated learning by sharing only data augmentation policies rather than data, improving performance across diverse settings.
DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation
·2987 words·15 mins·
Natural Language Processing
Large Language Models
🏢 Seoul National University
DropBP: Accelerate LLM fine-tuning by 44% while preserving accuracy!
Are Self-Attentions Effective for Time Series Forecasting?
·3575 words·17 mins·
Machine Learning
Deep Learning
🏢 Seoul National University
Cross-Attention-only Time Series Transformer (CATS) outperforms existing models by removing self-attention, improving long-term forecasting accuracy, and reducing computational cost.
An Adaptive Approach for Infinitely Many-armed Bandits under Generalized Rotting Constraints
·1703 words·8 mins·
Machine Learning
Reinforcement Learning
🏢 Seoul National University
An adaptive algorithm achieves tight regret bounds for infinitely many-armed bandits under generalized rotting constraints, addressing the challenge of rewards that decay over time.
Adversarial Environment Design via Regret-Guided Diffusion Models
·2707 words·13 mins·
Reinforcement Learning
🏢 Seoul National University
Regret-Guided Diffusion Models enhance unsupervised environment design by generating challenging, diverse training environments that improve agent robustness and zero-shot generalization.