🏢 Imperial College London

Adaptive Audio-Visual Speech Recognition via Matryoshka-Based Multimodal LLMs
·2695 words·13 mins
AI Generated 🤗 Daily Papers Multimodal Learning Audio-Visual Learning 🏢 Imperial College London
Llama-MTSK performs audio-visual speech recognition (AVSR) via Matryoshka-based LLMs, adapting to computational limits without sacrificing accuracy.
Hardware and Software Platform Inference
·2667 words·13 mins
AI Generated 🤗 Daily Papers Natural Language Processing Large Language Models 🏢 Imperial College London
Researchers developed Hardware and Software Platform Inference (HSPI), a method for identifying the underlying GPU and software stack used to serve LLMs, improving transparency in the industry.