
Paper Reviews by AI

2025

SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation
·3137 words·15 mins
AI Generated 🤗 Daily Papers Computer Vision Image Generation 🏢 NVIDIA
SANA-Sprint: An efficient diffusion model for ultra-fast text-to-image generation with continuous-time consistency distillation, achieving state-of-the-art performance in speed and quality.
Reangle-A-Video: 4D Video Generation as Video-to-Video Translation
·2533 words·12 mins
AI Generated 🤗 Daily Papers Computer Vision Video Understanding 🏢 KAIST AI
Reangle-A-Video generates synchronized multi-view videos from a single video via video-to-video translation, surpassing existing methods without specialized 4D training.
Quantization for OpenAI's Whisper Models: A Comparative Analysis
·1308 words·7 mins
AI Generated 🤗 Daily Papers Speech and Audio Speech Recognition 🏢 Independent Researcher
Quantization optimizes OpenAI’s Whisper models, balancing model size, speed, and accuracy for diverse applications.
PerCoV2: Improved Ultra-Low Bit-Rate Perceptual Image Compression with Implicit Hierarchical Masked Image Modeling
·2966 words·14 mins
AI Generated 🤗 Daily Papers Computer Vision Image Generation 🏢 Technical University of Munich
PerCoV2: Open ultra-low bit-rate perceptual image compression using implicit hierarchical masked image modeling, built on Stable Diffusion 3 for bandwidth-constrained applications.
Open-Sora 2.0: Training a Commercial-Level Video Generation Model in $200k
·2200 words·11 mins
AI Generated 🤗 Daily Papers Computer Vision Video Understanding 🏢 HPC-AI Tech
Open-Sora 2.0: A commercial-level video generation model trained for only $200k, achieving comparable results to state-of-the-art models.
On the Limitations of Vision-Language Models in Understanding Image Transforms
·2360 words·12 mins
AI Generated 🤗 Daily Papers Computer Vision Vision-Language Models 🏢 Cohere for AI Community
VLMs struggle with basic image transforms! This paper reveals their limitations in understanding image-level changes, impacting downstream tasks.
Neighboring Autoregressive Modeling for Efficient Visual Generation
·3102 words·15 mins
AI Generated 🤗 Daily Papers Computer Vision Image Generation 🏢 Zhejiang University, China
NAR: Neighboring Autoregressive Modeling enables efficient visual generation via locality-preserving parallel decoding.
Group-robust Machine Unlearning
·7203 words·34 mins
AI Generated 🤗 Daily Papers AI Theory Robustness 🏢 University of Trento
Group-robust machine unlearning via MIU reduces performance degradation in dominant groups after unlearning, preserving model robustness without compromising accuracy.
Florenz: Scaling Laws for Systematic Generalization in Vision-Language Models
·6018 words·29 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Fraunhofer IAIS
Florenz: Scaling laws for systematic generalization via monolingual vision-language models
Error Analyses of Auto-Regressive Video Diffusion Models: A Unified Framework
·3325 words·16 mins
AI Generated 🤗 Daily Papers Computer Vision Video Understanding 🏢 Sea AI Lab
Unified framework reveals and mitigates error sources in autoregressive video diffusion models.
Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption
·3100 words·15 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Shanghai Academy of Artificial Intelligence for Science
Cockatiel: Ensembling synthetic & human-preferred training boosts detailed video captioning, setting new SOTA on VDCSCORE.
Uni$\textbf{F}^2$ace: Fine-grained Face Understanding and Generation with Unified Multimodal Models
·2980 words·14 mins
AI Generated 🤗 Daily Papers Multimodal Learning Multimodal Generation 🏢 Peking University
UniF²ace: a novel unified multimodal model (UMM) tailored for fine-grained face understanding and generation.
Tuning-Free Multi-Event Long Video Generation via Synchronized Coupled Sampling
·3192 words·15 mins
AI Generated 🤗 Daily Papers Computer Vision Video Understanding 🏢 KAIST
SynCoS: Synchronized sampling generates high-quality & coherent long videos from text, without extra training!
SegAgent: Exploring Pixel Understanding Capabilities in MLLMs by Imitating Human Annotator Trajectories
·2632 words·13 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 Zhejiang University, China
SegAgent: Improves MLLMs’ pixel understanding by mimicking human annotation, enabling mask refinement without altering output space.
Referring to Any Person
·3096 words·15 mins
AI Generated 🤗 Daily Papers Multimodal Learning Vision-Language Models 🏢 International Digital Economy Academy (IDEA)
Introducing HumanRef, a new dataset & RexSeek, a multimodal LLM, to improve human-centric referring tasks by addressing limitations of existing methods.
QuoTA: Query-oriented Token Assignment via CoT Query Decouple for Long Video Comprehension
·3039 words·15 mins
AI Generated 🤗 Daily Papers Computer Vision Video Understanding 🏢 Xiamen University
QuoTA: Task-aware token assignment boosts long video comprehension in LVLMs via query-decoupled processing, without extra training!
Perplexity Trap: PLM-Based Retrievers Overrate Low Perplexity Documents
·3678 words·18 mins
AI Generated 🤗 Daily Papers Natural Language Processing Information Extraction 🏢 Renmin University of China
PLM retrievers overrate low-perplexity docs, causing source bias. This paper reveals the causal effect & offers a fix!
Open-World Skill Discovery from Unsegmented Demonstrations
·3148 words·15 mins
AI Generated 🤗 Daily Papers Computer Vision Video Understanding 🏢 Peking University
SBD: Self-supervised skill discovery from unsegmented videos!
OmniMamba: Efficient and Unified Multimodal Understanding and Generation via State Space Models
·2951 words·14 mins
AI Generated 🤗 Daily Papers Multimodal Learning Multimodal Understanding 🏢 Huazhong University of Science & Technology
OmniMamba: Efficient multimodal understanding and generation via SSMs, trained on 2M image-text pairs.
NullFace: Training-Free Localized Face Anonymization
·2015 words·10 mins
AI Generated 🤗 Daily Papers Computer Vision Face Recognition 🏢 University of Trento
NullFace: A training-free face anonymization method preserving non-identity attributes with localized control using latent diffusion inversion.