Vision-Language Models

From Head to Tail: Towards Balanced Representation in Large Vision-Language Models through Adaptive Data Calibration
·5931 words·28 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Fudan University
ADR balances vision-language models by adaptively calibrating long-tail data, boosting LLaVA 1.5 by 4.36% without increasing training data volume.
DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding
·2841 words·14 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Tsinghua University
DeepPerception enhances MLLMs with cognitive visual perception, achieving superior grounding through knowledge integration & reasoning.
STEVE: A Step Verification Pipeline for Computer-use Agent Training
·3895 words·19 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 CUHK
STEVE: A step verification pipeline for training computer-use agents.
CapArena: Benchmarking and Analyzing Detailed Image Captioning in the LLM Era
·4997 words·24 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 National Key Laboratory for Novel Software Technology, Nanjing University
CapArena: Detailed image caption benchmark in the LLM era, revealing metric biases and advancing automated evaluation.
Basic Category Usage in Vision Language Models
·1339 words·7 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Tennessee Tech University
VLMs exhibit human-like object categorization, favoring basic-level categories and mirroring human nuances tied to biological categories and expertise, suggesting these cognitive behaviors are learned.
Hyperbolic Safety-Aware Vision-Language Models
·3785 words·18 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 University of Modena and Reggio Emilia, Italy
HySAC: A hyperbolic framework for safety-aware vision-language models, improving content moderation and interpretability.
V-STaR: Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning
·222 words·2 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Queen Mary University of London
V-STaR: A new benchmark to evaluate Video-LLMs in video spatio-temporal reasoning, revealing gaps in current models’ understanding.
VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search
·2529 words·12 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 University of Waterloo
VisualWebInstruct: Scales up multimodal instruction data via web search, enhancing VLMs’ reasoning for complex tasks.
Large-scale Pre-training for Grounded Video Caption Generation
·2703 words·13 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Czech Institute of Informatics, Robotics and Cybernetics
GROVE: Pre-training on large-scale data for grounded video caption generation.
GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding
·2562 words·13 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Huazhong University of Science & Technology
GroundingSuite: A new benchmark that measures complex multi-granular pixel grounding to overcome current dataset limitations and push forward vision-language understanding.
From TOWER to SPIRE: Adding the Speech Modality to a Text-Only LLM
·1953 words·10 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Paris-Saclay University
SPIRE: Adds speech to text-only LLMs, maintaining text performance via discretized speech and continued pre-training.
On the Limitations of Vision-Language Models in Understanding Image Transforms
·2360 words·12 mins
AI Generated 🤗 Daily Papers Computer Vision Vision-Language Models 🏢 Cohere for AI Community
VLMs struggle with basic image transforms! This paper reveals their limitations in understanding image-level changes, impacting downstream tasks.
Florenz: Scaling Laws for Systematic Generalization in Vision-Language Models
·6018 words·29 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Fraunhofer IAIS
Florenz: Scaling laws for systematic generalization via monolingual vision-language models.
Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption
·3100 words·15 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Shanghai Academy of Artificial Intelligence for Science
Cockatiel: Ensembling synthetic & human-preferred training boosts detailed video captioning, setting new SOTA on VDCSCORE.
SegAgent: Exploring Pixel Understanding Capabilities in MLLMs by Imitating Human Annotator Trajectories
·2632 words·13 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Zhejiang University, China
SegAgent: Improves MLLMs’ pixel understanding by mimicking human annotation, enabling mask refinement without altering output space.
Referring to Any Person
·3096 words·15 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 International Digital Economy Academy (IDEA)
Introducing HumanRef, a new dataset & RexSeek, a multimodal LLM, to improve human-centric referring tasks by addressing limitations of existing methods.
GTR: Guided Thought Reinforcement Prevents Thought Collapse in RL-based VLM Agent Training
·2477 words·12 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Tsinghua University
GTR: Prevents thought collapse in RL-based VLM agents by process guidance, enhancing performance in complex visual reasoning tasks.
Video Action Differencing
·3793 words·18 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Stanford
VidDiff: Identifies subtle action differences in videos for coaching and skill learning.
Should VLMs be Pre-trained with Image Data?
·3469 words·17 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Toyota Research Institute
Image data during pre-training can boost Vision-Language Model (VLM) performance, especially when introduced later in the process.
VisualSimpleQA: A Benchmark for Decoupled Evaluation of Large Vision-Language Models in Fact-Seeking Question Answering
·2597 words·13 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Zhongguancun Laboratory
VisualSimpleQA: A new benchmark for fine-grained evaluation of visual and linguistic modules in fact-seeking LVLMs.