Multimodal Learning
STEVE: A Step Verification Pipeline for Computer-use Agent Training
·3895 words·19 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Vision-Language Models
🏢 CUHK
STEVE: A step verification pipeline for training computer-use agents.
PEBench: A Fictitious Dataset to Benchmark Machine Unlearning for Multimodal Large Language Models
·4158 words·20 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Multimodal Understanding
🏢 HIT
PEBench: A new benchmark for machine unlearning in multimodal large language models, supporting more secure multimodal model development.
Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey
·3237 words·16 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Multimodal Reasoning
🏢 NUS
A comprehensive survey of multimodal chain-of-thought (MCoT) reasoning, bridging the gap in existing literature and fostering innovation towards multimodal AGI.
MPBench: A Comprehensive Multimodal Reasoning Benchmark for Process Errors Identification
·2497 words·12 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Multimodal Reasoning
🏢 HIT
MPBench: Multimodal benchmark to identify errors in reasoning processes.
CapArena: Benchmarking and Analyzing Detailed Image Captioning in the LLM Era
·4997 words·24 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Vision-Language Models
🏢 National Key Laboratory for Novel Software Technology, Nanjing University
CapArena: Detailed image caption benchmark in the LLM era, revealing metric biases and advancing automated evaluation.
Being-0: A Humanoid Robotic Agent with Vision-Language Models and Modular Skills
·4598 words·22 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Embodied AI
🏢 Peking University
Being-0: A humanoid robot agent achieves complex tasks by integrating a vision-language model with modular skills, enhancing efficiency and real-time performance.
Basic Category Usage in Vision Language Models
·1339 words·7 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Vision-Language Models
🏢 Tennessee Tech University
VLMs exhibit human-like object categorization, favoring basic-level categories and mirroring biological and expertise-related nuances, suggesting learned cognitive behaviors.
Hyperbolic Safety-Aware Vision-Language Models
·3785 words·18 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Vision-Language Models
🏢 University of Modena and Reggio Emilia, Italy
HySAC: A hyperbolic framework for safety-aware vision-language models, improving content moderation and interpretability.
V-STaR: Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning
·222 words·2 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Vision-Language Models
🏢 Queen Mary University of London
V-STaR: A new benchmark to evaluate Video-LLMs in video spatio-temporal reasoning, revealing gaps in current models’ understanding.
World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning
·3847 words·19 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Embodied AI
🏢 Fudan University
D2PO: World modeling enhances embodied task planning by jointly optimizing state prediction and action selection, leading to more efficient execution.
VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search
·2529 words·12 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Vision-Language Models
🏢 University of Waterloo
VisualWebInstruct: Scales up multimodal instruction data via web search, enhancing VLMs’ reasoning for complex tasks.
UniGoal: Towards Universal Zero-shot Goal-oriented Navigation
·2233 words·11 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Embodied AI
🏢 Tsinghua University
UniGoal: A novel framework for universal zero-shot goal-oriented navigation, outperforming task-specific methods with a unified approach.
Long-Video Audio Synthesis with Multi-Agent Collaboration
·2152 words·11 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Audio-Visual Learning
🏢 Hong Kong University of Science and Technology
LVAS-Agent: A multi-agent system for long-video audio synthesis through collaborative script generation, sound design, and dubbing.
Large-scale Pre-training for Grounded Video Caption Generation
·2703 words·13 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Vision-Language Models
🏢 Czech Institute of Informatics, Robotics and Cybernetics
GROVE: Pre-training on large-scale data for grounded video caption generation.
GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding
·2562 words·13 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Vision-Language Models
🏢 Huazhong University of Science & Technology
GroundingSuite: A new benchmark that measures complex multi-granular pixel grounding to overcome current dataset limitations and push forward vision-language understanding.
From TOWER to SPIRE: Adding the Speech Modality to a Text-Only LLM
·1953 words·10 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Vision-Language Models
🏢 Paris-Saclay University
SPIRE: Adds speech to text-only LLMs, maintaining text performance via discretized speech and continued pre-training.
FlowTok: Flowing Seamlessly Across Text and Image Tokens
·2984 words·15 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Multimodal Generation
🏢 ByteDance Seed
FlowTok: A framework that flows seamlessly across text and image tokens.
Florenz: Scaling Laws for Systematic Generalization in Vision-Language Models
·6018 words·29 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Vision-Language Models
🏢 Fraunhofer IAIS
Florenz: Scaling laws for systematic generalization via monolingual vision-language models.
Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption
·3100 words·15 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Vision-Language Models
🏢 Shanghai Academy of Artificial Intelligence for Science
Cockatiel: Ensembling synthetic & human-preferred training boosts detailed video captioning, setting new SOTA on VDCSCORE.
Uni$\textbf{F}^2$ace: Fine-grained Face Understanding and Generation with Unified Multimodal Models
·2980 words·14 mins·
AI Generated
🤗 Daily Papers
Multimodal Learning
Multimodal Generation
🏢 Peking University
UniF²ace: A unified multimodal model (UMM) tailored for fine-grained face understanding and generation.