🏢 Arizona State University

TripletCLIP: Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives
·3187 words·15 mins
Multimodal Learning Vision-Language Models 🏢 Arizona State University
TripletCLIP improves CLIP’s compositional reasoning by generating synthetic hard-negative image-text pairs, achieving an absolute improvement of over 9% on SugarCrepe.
Enhancing Robustness of Last Layer Two-Stage Fair Model Corrections
·2233 words·11 mins
AI Theory Fairness 🏢 Arizona State University
This work boosts fair machine learning’s robustness to noisy labels by introducing a novel label-spreading method, achieving state-of-the-art worst-group accuracy.
Chain of Thoughtlessness? An Analysis of CoT in Planning
·2944 words·14 mins
Natural Language Processing Large Language Models 🏢 Arizona State University
Chain of Thought prompting in LLMs generalizes poorly: it yields performance gains only when prompts are highly specific to the problem type, highlighting a critical trade-off between perfor…
Belief-State Query Policies for User-Aligned Planning under Partial Observability
·1669 words·8 mins
AI Applications Robotics 🏢 Arizona State University
This paper introduces Belief-State Query (BSQ) constraints for user-aligned planning in partially observable settings, along with algorithms that guarantee user alignment and computational feasibility…