
Posters

2024

CriticEval: Evaluating Large-scale Language Model as Critic
·4755 words·23 mins
Natural Language Processing Large Language Models 🏢 Beijing Institute of Technology
CRITICEVAL: A new benchmark that reliably evaluates LLMs’ ability to identify and correct flaws in their own responses, addressing the limitations of existing methods by offering comprehensive and reliable evaluations.
Credit Attribution and Stable Compression
·299 words·2 mins
AI Theory Privacy 🏢 Tel Aviv University
New definitions of differential privacy enable machine learning algorithms to credit sources appropriately, balancing data utility and copyright compliance.
Credal Learning Theory
·2051 words·10 mins
AI Generated AI Theory Generalization 🏢 University of Manchester
Credal Learning Theory uses convex sets of probabilities to model data distribution variability, providing theoretical risk bounds for machine learning models in dynamic environments.
Credal Deep Ensembles for Uncertainty Quantification
·3555 words·17 mins
Machine Learning Deep Learning 🏢 KU Leuven
Credal Deep Ensembles (CreDEs) improve uncertainty quantification in deep learning by predicting probability intervals, enhancing accuracy and calibration, particularly for out-of-distribution data.
CRAYM: Neural Field Optimization via Camera RAY Matching
·2649 words·13 mins
Computer Vision 3D Vision 🏢 Shenzhen University
CRAYM: Neural field optimization via camera RAY matching enhances 3D reconstruction by using camera rays, not pixels, improving both novel view synthesis and geometry.
Crafting Interpretable Embeddings for Language Neuroscience by Asking LLMs Questions
·1981 words·10 mins
Natural Language Processing Large Language Models 🏢 UC Berkeley
LLM-based text embeddings are powerful but lack interpretability. This paper introduces QA-Emb, a novel method that uses an LLM to answer yes/no questions about a text, thereby producing an interpretable embedding.
CoVoMix: Advancing Zero-Shot Speech Generation for Human-like Multi-talker Conversations
·2583 words·13 mins
Natural Language Processing Dialogue Systems 🏢 Shanghai Jiao Tong University
CoVoMix: Generating human-like, multi-speaker conversations with zero-shot speech synthesis.
COVE: Unleashing the Diffusion Feature Correspondence for Consistent Video Editing
·2236 words·11 mins
Computer Vision Video Understanding 🏢 Tsinghua University
COVE: Consistent high-quality video editing achieved by leveraging diffusion feature correspondence for temporal consistency.
Covariate Shift Corrected Conditional Randomization Test
·2259 words·11 mins
AI Generated AI Theory Causality 🏢 Harvard University
A new Covariate Shift Corrected Pearson Chi-squared Conditional Randomization (csPCR) test accurately assesses conditional independence even when data distributions vary between source and target populations.
Coupled Mamba: Enhanced Multimodal Fusion with Coupled State Space Model
·2541 words·12 mins
AI Generated Multimodal Learning Vision-Language Models 🏢 Huazhong University of Science and Technology
Coupled Mamba enhances multimodal fusion via a coupled state space model, boosting both accuracy and efficiency.
CountGD: Multi-Modal Open-World Counting
·2520 words·12 mins
Computer Vision Object Detection 🏢 University of Oxford
COUNTGD: A new multi-modal model counts objects in images using text or visual examples, significantly improving open-world counting accuracy.
Counterfactual Fairness by Combining Factual and Counterfactual Predictions
·2056 words·10 mins
AI Theory Fairness 🏢 Purdue University
This paper proposes a novel method to achieve optimal counterfactual fairness in machine learning models while minimizing predictive performance degradation.
Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning
·1908 words·9 mins
Machine Learning Deep Learning 🏢 Cornell University
Biologically inspired Counter-Current Learning (CCL) uses dual networks for deep learning, offering performance comparable to other biologically plausible algorithms while enhancing biological realism.
CoSW: Conditional Sample Weighting for Smoke Segmentation with Label Noise
·2129 words·10 mins
Computer Vision Image Segmentation 🏢 East China University of Science and Technology
CoSW: a novel conditional sample weighting method for robust smoke segmentation that achieves state-of-the-art results by handling inconsistent noisy labels through a multi-prototype framework.
Cost-efficient Knowledge-based Question Answering with Large Language Models
·1874 words·9 mins
AI Generated Natural Language Processing Question Answering 🏢 Hong Kong Polytechnic University
Coke: a cost-efficient KBQA strategy that combines LLMs and KGMs, maximizing accuracy while cutting GPT-4 fees by up to 20.89%.
Cost-aware Bayesian Optimization via the Pandora's Box Gittins Index
·2385 words·12 mins
AI Theory Optimization 🏢 Cornell University
Cost-aware Bayesian optimization gets a boost with the Pandora’s Box Gittins Index, a novel acquisition function that efficiently balances exploration and exploitation while accounting for evaluation costs.
COSMIC: Compress Satellite Image Efficiently via Diffusion Compensation
·3381 words·16 mins
Computer Vision Image Compression 🏢 Tsinghua University
COSMIC efficiently compresses satellite images via a lightweight encoder and diffusion compensation, enabling practical onboard processing and high compression ratios.
CosAE: Learnable Fourier Series for Image Restoration
·2867 words·14 mins
Computer Vision Image Restoration 🏢 NVIDIA Research
CosAE: a novel autoencoder using learnable Fourier series that achieves state-of-the-art image restoration by encoding frequency coefficients in its narrow bottleneck, preserving fine details even under extreme compression.
Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification
·375 words·2 mins
AI Theory Robustness 🏢 University of Virginia
This paper presents novel algorithms for linear bandits that are robust to corrupted rewards, achieving minimax optimality and optimal scaling for gap-dependent misspecification, extending to reinforcement learning.
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning
·2973 words·14 mins
AI Generated Natural Language Processing Large Language Models 🏢 King Abdullah University of Science and Technology
CorDA: Context-oriented weight decomposition enhances large language model fine-tuning by integrating task context, improving efficiency and mitigating catastrophic forgetting.