🏢 University of Texas at Austin
Understanding and Mitigating Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing
·3334 words·16 mins·
AI Generated
🤗 Daily Papers
Natural Language Processing
Large Language Models
🏢 University of Texas at Austin
Polarizing SSMs’ state transition matrices enhances long-range dependency modeling by mitigating recency bias and over-smoothing.
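Below is a minimal, illustrative sketch (not the paper's implementation) of what polarizing a diagonal state-transition matrix could look like: one reserved channel is pinned to 0 and another to 1, so that alongside the learned channels the recurrence always has both a pure reset path and a lossless memory path. The function names and channel layout are hypothetical.

```python
# Hypothetical sketch of channel polarization for a diagonal SSM; names and layout are assumptions.
import torch

def polarized_transition(a_logits: torch.Tensor) -> torch.Tensor:
    """Map per-channel logits into (0, 1), then pin the first channel to 0
    (no carry-over, counters over-smoothing) and the last to 1
    (lossless accumulation, counters recency bias)."""
    a = torch.sigmoid(a_logits).clone()
    a[..., 0] = 0.0
    a[..., -1] = 1.0
    return a

def diagonal_ssm_scan(a: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Per-channel linear recurrence h_t = a * h_{t-1} + x_t over x of shape (T, C)."""
    h = torch.zeros_like(x[0])
    states = []
    for x_t in x:
        h = a * h + x_t
        states.append(h)
    return torch.stack(states)

a = polarized_transition(torch.randn(8))      # 8 channels, two of them polarized
y = diagonal_ssm_scan(a, torch.randn(32, 8))  # sequence of length 32
```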
Learned Compression for Compressed Learning
·2966 words·14 mins·
AI Generated
🤗 Daily Papers
Computer Vision
Image Classification
🏢 University of Texas at Austin
WaLLOC: a novel neural codec that boosts compressed-domain learning by combining wavelet transforms with asymmetric autoencoders, achieving high compression ratios with minimal computation and uniform dime…
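As a rough, hypothetical illustration of the idea (not the WaLLOC codec itself), the sketch below pairs an off-the-shelf 2D wavelet transform from PyWavelets with a deliberately small learned encoder, so that most of the spatial reduction comes from the fixed transform and only compact features reach downstream compressed-domain models. `wavelet_channels` and `TinyEncoder` are made-up names.

```python
# Hypothetical sketch: fixed wavelet analysis + lightweight learned encoder; not the WaLLOC implementation.
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_channels(img: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Single-level 2D DWT; stack the four subbands as channels (assumed packing)."""
    ll, (lh, hl, hh) = pywt.dwt2(img, wavelet)
    return np.stack([ll, lh, hl, hh], axis=0)  # (4, H/2, W/2)

class TinyEncoder(nn.Module):
    """Shallow encoder: the wavelet transform has already halved each spatial dimension."""
    def __init__(self, in_ch: int = 4, latent_ch: int = 16):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, latent_ch, kernel_size=3, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

img = np.random.rand(64, 64).astype(np.float32)
subbands = torch.from_numpy(wavelet_channels(img)).float().unsqueeze(0)  # (1, 4, 32, 32)
latent = TinyEncoder()(subbands)                                         # (1, 16, 16, 16)
print(latent.shape)  # compact features a compressed-domain classifier could consume
```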