Posters

2024

Coherence-free Entrywise Estimation of Eigenvectors in Low-rank Signal-plus-noise Matrix Models
·1535 words·8 mins
AI Theory Optimization 🏢 University of Wisconsin-Madison
New method for eigenvector estimation achieves optimal rates without coherence dependence, improving low-rank matrix denoising and related tasks.
CoFie: Learning Compact Neural Surface Representations with Coordinate Fields
·2625 words·13 mins
AI Generated Computer Vision 3D Vision 🏢 University of Texas at Austin
CoFie: A novel local geometry-aware neural surface representation dramatically improves accuracy and efficiency in 3D shape modeling by using coordinate fields to compress local shape information.
Coevolving with the Other You: Fine-Tuning LLM with Sequential Cooperative Multi-Agent Reinforcement Learning
·2454 words·12 mins
Natural Language Processing Large Language Models 🏢 School of Artificial Intelligence, University of Chinese Academy of Sciences
CORY: a novel multi-agent RL framework boosts LLM fine-tuning!
CodeRosetta: Pushing the Boundaries of Unsupervised Code Translation for Parallel Programming
·5423 words·26 mins
AI Generated Natural Language Processing Machine Translation 🏢 Iowa State University
CodeRosetta pushes the boundaries of unsupervised code translation by introducing the first encoder-decoder model that efficiently translates between programming languages and their parallel HPC exte…
Coded Computing for Resilient Distributed Computing: A Learning-Theoretic Framework
·2345 words·12 mins
AI Generated AI Theory Optimization 🏢 University of Minnesota
LeTCC: A novel learning-theoretic framework for resilient distributed computing, achieving faster convergence and higher accuracy than existing methods by integrating learning theory principles with c…
CODE: Contrasting Self-generated Description to Combat Hallucination in Large Multi-modal Models
·3116 words·15 mins
Multimodal Learning Vision-Language Models 🏢 Integrated Vision and Language Lab, KAIST, South Korea
CODE combats LMM hallucinations by contrasting self-generated descriptions with visual content during decoding, enhancing response accuracy without retraining.
Code Repair with LLMs gives an Exploration-Exploitation Tradeoff
·3695 words·18 mins
Natural Language Processing Large Language Models 🏢 Cornell University
New program synthesis method, REX, leverages Thompson Sampling to balance exploration and exploitation in iterative LLM code refinement, solving more problems with fewer model calls.
CODA: A Correlation-Oriented Disentanglement and Augmentation Modeling Scheme for Better Resisting Subpopulation Shifts
·1907 words·9 mins
Machine Learning Deep Learning 🏢 City University of Hong Kong
CODA: A novel modeling scheme tackles subpopulation shifts in machine learning by disentangling spurious correlations, augmenting data strategically, and using reweighted consistency loss for improved…
CoBo: Collaborative Learning via Bilevel Optimization
·1628 words·8 mins
Machine Learning Federated Learning 🏢 EPFL
CoBo: A novel bilevel optimization algorithm for collaborative learning surpasses existing methods by efficiently selecting helpful clients, resulting in superior performance and scalability.
Coarse-to-Fine Concept Bottleneck Models
·2840 words·14 mins
Computer Vision Image Classification 🏢 Inria
Hierarchical concept bottleneck models boost interpretability and accuracy in visual classification by uncovering both high-level and low-level concepts.
CNCA: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors
·2085 words·10 mins
Computer Vision Object Detection 🏢 Harbin Institute of Technology, Shenzhen
Researchers developed CNCA, a novel method that generates realistic and customizable adversarial camouflage for vehicle detectors by leveraging a pre-trained diffusion model, surpassing existing metho…
Clustering with Non-adaptive Subset Queries
·407 words·2 mins
Machine Learning Unsupervised Learning 🏢 UC San Diego
This paper introduces novel non-adaptive algorithms for clustering using subset queries, achieving near-linear query complexity and improving upon existing limitations of pairwise query methods.
Clustering then Propagation: Select Better Anchors for Knowledge Graph Embedding
·1993 words·10 mins
Machine Learning Knowledge Graph Embedding 🏢 National University of Defense Technology
RecPiece selects better anchors for knowledge graph embedding by clustering entities on their relational features before propagation.
Clustering in Causal Attention Masking
·1455 words·7 mins
AI Theory Causality 🏢 MIT
Researchers strengthen understanding of transformer self-attention by proving asymptotic convergence to single clusters under causal masking, linking it to the Rényi parking problem.
Cluster-Learngene: Inheriting Adaptive Clusters for Vision Transformers
·3088 words·15 mins
AI Generated Computer Vision Vision-Language Models 🏢 School of Computer Science and Engineering, Southeast University
Cluster-Learngene efficiently initializes elastic-scale Vision Transformers by adaptively clustering and inheriting key modules from a large ancestry model, saving resources and boosting downstream ta…
CLUES: Collaborative Private-domain High-quality Data Selection for LLMs via Training Dynamics
·2368 words·12 mins
AI Generated Natural Language Processing Large Language Models 🏢 University of Cambridge
CLUES: Collaborative learning selects high-quality private data for LLM fine-tuning via training dynamics, significantly boosting performance in diverse domains.
Cloud Object Detector Adaptation by Integrating Different Source Knowledge
·2997 words·15 mins
Computer Vision Object Detection 🏢 University of Electronic Science and Technology of China
COIN: A novel method for Cloud Object Detector Adaptation that integrates knowledge from cloud models and CLIP to train highly accurate target detectors, achieving state-of-the-art performance.
Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation
·2760 words·13 mins
AI Applications Robotics 🏢 Tsinghua University
CLOVER: A closed-loop visuomotor framework using generative visual plans and feedback mechanisms achieves state-of-the-art results in long-horizon robotic manipulation tasks.
CLIPCEIL: Domain Generalization through CLIP via Channel rEfinement and Image-text aLignment
·3674 words·18 mins
AI Generated Multimodal Learning Vision-Language Models 🏢 Brookhaven National Laboratory
CLIPCEIL enhances CLIP’s domain generalization by refining feature channels for domain invariance and aligning image-text embeddings, achieving state-of-the-art performance.
CLIP in Mirror: Disentangling text from visual images through reflection
·4284 words·21 mins
AI Generated Multimodal Learning Vision-Language Models 🏢 Beihang University
MirrorCLIP disentangles text from images in CLIP using mirror reflection differences, enhancing robustness against text-visual image confusion.