🏢 Wuhan University
What If the Input is Expanded in OOD Detection?
·3779 words·18 mins·
Machine Learning
Deep Learning
🏢 Wuhan University
Boost OOD detection accuracy by averaging model confidence scores from original and corrupted inputs!
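The teaser above describes a simple score-averaging idea: run the model on the original input and on corrupted variants, then average the confidence scores. A minimal sketch of that idea, assuming a maximum-softmax-probability confidence score and user-supplied corruption functions (the names `expanded_ood_score` and `confidence` and the toy corruptions are illustrative, not the paper's actual API):

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence(logits):
    # maximum softmax probability (MSP) as the per-sample confidence score
    return softmax(logits).max(axis=-1)

def expanded_ood_score(model, x, corruptions):
    # Average confidence over the original input and its corrupted variants.
    # Higher averaged confidence suggests the input is in-distribution;
    # thresholding this score yields an OOD decision.
    scores = [confidence(model(x))]
    scores += [confidence(model(corrupt(x))) for corrupt in corruptions]
    return np.mean(scores, axis=0)
```

In practice one would pick a threshold on the averaged score (e.g., from a held-out in-distribution set) and flag inputs below it as out-of-distribution.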
Toward Real Ultra Image Segmentation: Leveraging Surrounding Context to Cultivate General Segmentation Model
·2381 words·12 mins·
Computer Vision
Image Segmentation
🏢 Wuhan University
SGNet cultivates general segmentation models for ultra images by integrating surrounding context, achieving significant performance improvements across various datasets.
The Reliability of OKRidge Method in Solving Sparse Ridge Regression Problems
·2340 words·11 mins·
AI Theory
Optimization
🏢 Wuhan University
OKRidge’s reliability for solving sparse ridge regression problems is rigorously proven through theoretical error analysis, enhancing its applicability in machine learning.
Text-DiFuse: An Interactive Multi-Modal Image Fusion Framework based on Text-modulated Diffusion Model
·2167 words·11 mins·
Multimodal Learning
Vision-Language Models
🏢 Wuhan University
Text-DiFuse: A novel interactive multi-modal image fusion framework that leverages text-modulated diffusion models for superior performance in complex scenarios.
ROBIN: Robust and Invisible Watermarks for Diffusion Models with Adversarial Optimization
·2551 words·12 mins·
Computer Vision
Image Generation
🏢 Wuhan University
ROBIN: A novel watermarking method for diffusion models that actively conceals robust watermarks using adversarial optimization, enabling strong, imperceptible, and verifiable image authentication.
Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models
·2065 words·10 mins·
Natural Language Processing
Large Language Models
🏢 Wuhan University
Reference Trustable Decoding (RTD) revolutionizes large language model adaptation by offering a training-free method, enabling efficient and cost-effective task adaptation without parameter adjustment…
Prospective Representation Learning for Non-Exemplar Class-Incremental Learning
·2489 words·12 mins·
Machine Learning
Few-Shot Learning
🏢 Wuhan University
Prospective Representation Learning (PRL) revolutionizes non-exemplar class-incremental learning by proactively reserving embedding space for new classes and minimizing the shock of new data on previo…
Parameter Disparities Dissection for Backdoor Defense in Heterogeneous Federated Learning
·1504 words·8 mins·
Machine Learning
Federated Learning
🏢 Wuhan University
FDCR defends against backdoor attacks in heterogeneous federated learning by identifying malicious clients via Fisher Information-based parameter importance discrepancies and rescaling crucial paramet…
Non-asymptotic Approximation Error Bounds of Parameterized Quantum Circuits
·1430 words·7 mins·
🏢 Wuhan University
New non-asymptotic approximation error bounds show that parameterized quantum circuits can efficiently approximate complex functions, potentially surpassing classical neural networks.
InfoRM: Mitigating Reward Hacking in RLHF via Information-Theoretic Reward Modeling
·5629 words·27 mins·
Natural Language Processing
Large Language Models
🏢 Wuhan University
InfoRM tackles reward hacking in RLHF using an information-theoretic approach, enhancing generalizability and enabling overoptimization detection.
FedSSP: Federated Graph Learning with Spectral Knowledge and Personalized Preference
·1387 words·7 mins·
Machine Learning
Federated Learning
🏢 Wuhan University
FedSSP tackles personalized federated graph learning challenges by sharing generic spectral knowledge and incorporating personalized preferences, achieving superior performance in cross-domain scenari…
Decomposed Prompt Decision Transformer for Efficient Unseen Task Generalization
·2344 words·12 mins·
Machine Learning
Reinforcement Learning
🏢 Wuhan University
Decomposed Prompt Decision Transformer (DPDT) efficiently learns prompts for unseen tasks using a two-stage paradigm, achieving superior performance in multi-task offline reinforcement learning.
A Boosting-Type Convergence Result for AdaBoost.MH with Factorized Multi-Class Classifiers
·358 words·2 mins·
AI Generated
AI Theory
Optimization
🏢 Wuhan University
Solved a long-standing open problem: Factorized AdaBoost.MH now has a proven convergence rate!