🏢 ETH Zurich

Relightable Full-Body Gaussian Codec Avatars
·3832 words·18 mins
AI Generated 🤗 Daily Papers Computer Vision 3D Vision 🏢 ETH Zurich
Relightable Full-Body Gaussian Codec Avatars: Realistic, animatable full-body avatars are now possible using learned radiance transfer and efficient 3D Gaussian splatting.
Reasoning Language Models: A Blueprint
·3562 words·17 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 ETH Zurich
Democratizing advanced reasoning in AI, this blueprint introduces a modular framework for building Reasoning Language Models (RLMs), simplifying development and enhancing accessibility.
GSTAR: Gaussian Surface Tracking and Reconstruction
·2047 words·10 mins
AI Generated 🤗 Daily Papers Computer Vision 3D Vision 🏢 ETH Zurich
GSTAR: A novel method achieving photorealistic rendering, accurate reconstruction, and reliable 3D tracking of dynamic scenes with changing topology, even handling surfaces that appear, disappear, or…
UIP2P: Unsupervised Instruction-based Image Editing via Cycle Edit Consistency
·3351 words·16 mins
AI Generated 🤗 Daily Papers Computer Vision Image Generation 🏢 ETH Zurich
UIP2P: Unsupervised instruction-based image editing achieves high-fidelity edits by enforcing Cycle Edit Consistency, eliminating the need for ground-truth data.
LoRACLR: Contrastive Adaptation for Customization of Diffusion Models
·2785 words·14 mins
AI Generated 🤗 Daily Papers Computer Vision Image Generation 🏢 ETH Zurich
LoRACLR merges multiple LoRA models for high-fidelity multi-concept image generation, using a contrastive objective to ensure concept distinctiveness and prevent interference.
Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models
·5261 words·25 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 ETH Zurich
LLMs’ hallucinations stem from entity recognition: sparse autoencoders (SAEs) reveal the model’s ‘self-knowledge’, which causally affects whether it hallucinates or refuses to answer. This mechanism is even repurposed by chat fine-tuning.