🏒 KAIST

GTA: Generative Trajectory Augmentation with Guidance for Offline Reinforcement Learning
·3982 words·19 mins
Machine Learning Reinforcement Learning 🏒 KAIST
Generative Trajectory Augmentation (GTA) boosts offline reinforcement learning by generating high-reward trajectories with a conditional diffusion model, enhancing algorithm performance…
GrounDiT: Grounding Diffusion Transformers via Noisy Patch Transplantation
·2589 words·13 mins
Multimodal Learning Vision-Language Models 🏒 KAIST
GrounDiT: Training-free spatial grounding for text-to-image generation using Diffusion Transformers and a novel noisy patch transplantation technique for precise object placement.
Generalizable Person Re-identification via Balancing Alignment and Uniformity
·3010 words·15 mins
AI Generated Computer Vision Face Recognition 🏒 KAIST
Balancing Alignment and Uniformity (BAU) framework improves generalizable person re-identification by mitigating the polarized effects of data augmentation, achieving state-of-the-art performance.
Exploiting Representation Curvature for Boundary Detection in Time Series
·2189 words·11 mins
Machine Learning Self-Supervised Learning 🏒 KAIST
RECURVE: A novel boundary detection method leveraging representation trajectory curvature, surpassing state-of-the-art techniques by accommodating both gradual and abrupt changes in time series.
Exactly Minimax-Optimal Locally Differentially Private Sampling
·1615 words·8 mins
AI Theory Privacy 🏒 KAIST
This paper provides the first exact minimax-optimal mechanisms for locally differentially private sampling, applicable across all f-divergences.
EPIC: Effective Prompting for Imbalanced-Class Data Synthesis in Tabular Data Classification via Large Language Models
·5652 words·27 mins
AI Generated Machine Learning Few-Shot Learning 🏒 KAIST
EPIC: Effective prompting makes LLMs generate high-quality synthetic tabular data, significantly boosting imbalanced-class classification.
Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting
·2803 words·14 mins
AI Generated Computer Vision 3D Vision 🏒 KAIST
Effective rank regularization enhances 3D Gaussian splatting, resolving needle-like artifacts and improving 3D model quality.
Do's and Don'ts: Learning Desirable Skills with Instruction Videos
·2781 words·14 mins
AI Generated Machine Learning Reinforcement Learning 🏒 KAIST
DoDont, a novel algorithm, uses instruction videos to guide unsupervised skill discovery, effectively learning desirable behaviors while avoiding undesirable ones in complex continuous control tasks.
Direct Consistency Optimization for Robust Customization of Text-to-Image Diffusion models
·3011 words·15 mins
Computer Vision Image Generation 🏒 KAIST
Boosting personalized image generation! Direct Consistency Optimization (DCO) fine-tunes text-to-image models, ensuring subject consistency and prompt fidelity, even when merging separately customized…
Differential Privacy in Scalable General Kernel Learning via $K$-means Nyström Random Features
·1468 words·7 mins
AI Generated AI Theory Privacy 🏒 KAIST
Differentially private scalable kernel learning is achieved via a novel DP K-means Nyström method, enabling efficient and accurate model training for general kernels while safeguarding privacy.
Adaptive $Q$-Aid for Conditional Supervised Learning in Offline Reinforcement Learning
·3193 words·15 mins
Machine Learning Reinforcement Learning 🏒 KAIST
Q-Aided Conditional Supervised Learning (QCS) combines the stability of return-conditioned supervised learning with the stitching ability of Q-functions, achieving superior offline reinfor…
A Unified Confidence Sequence for Generalized Linear Models, with Applications to Bandits
·1965 words·10 mins
AI Generated Machine Learning Reinforcement Learning 🏒 KAIST
A unified confidence sequence (CS) construction for generalized linear models (GLMs) achieves state-of-the-art regret bounds for contextual bandits, notably a poly(S)-free regret for logistic bandits.
(FL)$^2$: Overcoming Few Labels in Federated Semi-Supervised Learning
·2049 words·10 mins
AI Generated Machine Learning Federated Learning 🏒 KAIST
Federated Semi-Supervised Learning (FSSL) struggles with limited labeled data. (FL)² bridges this gap using adaptive thresholding, sharpness-aware consistency regularization, and learning status-awar…