Posters
2024
LACIE: Listener-Aware Finetuning for Calibration in Large Language Models
·2396 words·12 mins·
Natural Language Processing
Large Language Models
🏢 UNC Chapel Hill
LACIE: Listener-aware finetuning improves LLM confidence calibration, cutting the rate at which human listeners accept incorrect answers by 47% while maintaining acceptance of correct answers.
Label Noise: Ignorance Is Bliss
·3504 words·17 mins·
AI Generated
Machine Learning
Semi-Supervised Learning
🏢 University of Michigan
Ignorance is bliss: A new framework shows ignoring label noise in multi-class classification can achieve state-of-the-art performance, especially when using self-supervised feature extraction.
Label Delay in Online Continual Learning
·4705 words·23 mins·
AI Generated
Machine Learning
Continual Learning
🏢 University of Oxford
Bridging the accuracy gap in online continual learning caused by label delays, a new framework with Importance Weighted Memory Sampling prioritizes relevant memory samples, significantly outperforming…
L4GM: Large 4D Gaussian Reconstruction Model
·2618 words·13 mins·
Computer Vision
3D Vision
🏢 University of Toronto
L4GM: The first 4D model generating high-quality animated 3D objects from single-view videos in a single feed-forward pass.
L-TTA: Lightweight Test-Time Adaptation Using a Versatile Stem Layer
·2871 words·14 mins·
AI Generated
Computer Vision
Image Classification
🏢 Seoul National University of Science and Technology
L-TTA: A lightweight test-time adaptation method using a versatile stem layer minimizes channel-wise uncertainty for rapid and memory-efficient adaptation to new domains.
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
·5270 words·25 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 UC Berkeley
KVQuant achieves <0.1 perplexity degradation with 3-bit quantization in LLMs by using per-channel key quantization, pre-RoPE quantization, and non-uniform quantization, enabling 10M context length inference.
KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization
·3037 words·15 mins·
Natural Language Processing
Large Language Models
🏢 Dept. of Computer Science, Rice University
Boost LLM inference speed 1.4-3.5x by using Coupled Quantization (CQ) to compress KV cache down to 1 bit per channel, while preserving model accuracy.
Kraken: Inherently Parallel Transformers For Efficient Multi-Device Inference
·2061 words·10 mins·
Natural Language Processing
Large Language Models
🏢 Princeton University
Kraken: A new Transformer architecture boosts multi-device inference speed by 35.6% by cleverly overlapping communication with computation.
KptLLM: Unveiling the Power of Large Language Model for Keypoint Comprehension
·1673 words·8 mins·
Natural Language Processing
Large Language Models
🏢 University of Hong Kong
KptLLM: A novel multimodal model leverages LLMs for superior keypoint comprehension, outperforming existing methods in various benchmarks.
KOALA: Empirical Lessons Toward Memory-Efficient and Fast Diffusion Models for Text-to-Image Synthesis
·5238 words·25 mins·
Computer Vision
Image Generation
🏢 Electronics and Telecommunications Research Institute
KOALA: Efficient text-to-image diffusion models that are 4x faster and 69% smaller than SDXL, generating 1024px images on consumer GPUs with 8GB of VRAM.
Knowledge-Empowered Dynamic Graph Network for Irregularly Sampled Medical Time Series
·3904 words·19 mins·
AI Generated
AI Applications
Healthcare
🏢 South China University of Technology
KEDGN, a novel graph neural network, leverages medical knowledge to model variable-specific temporal dependencies and dynamic inter-variable correlations in irregularly sampled medical time series, si…
Knowledge Graph Completion by Intermediate Variables Regularization
·2107 words·10 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Fudan University
Novel intermediate variables regularization boosts knowledge graph completion!
Knowledge Composition using Task Vectors with Learned Anisotropic Scaling
·4960 words·24 mins·
AI Generated
Computer Vision
Few-Shot Learning
🏢 Australian Institute for Machine Learning
aTLAS: a novel parameter-efficient fine-tuning method using learned anisotropic scaling of task vectors for enhanced knowledge composition and transfer.
Knowledge Circuits in Pretrained Transformers
·3083 words·15 mins·
Natural Language Processing
Large Language Models
🏢 Zhejiang University
Researchers unveil ‘knowledge circuits’ within LLMs, revealing how knowledge is collaboratively encoded and utilized, informing improved LLM design and clearer interpretation of model behavior.
KnowGPT: Knowledge Graph based Prompting for Large Language Models
·1971 words·10 mins·
Natural Language Processing
Question Answering
🏢 Hong Kong Polytechnic University
KnowGPT: A novel framework boosts Large Language Model accuracy by intelligently integrating knowledge graphs, significantly reducing factual errors and achieving near-human performance on benchmark datasets.
KG-FIT: Knowledge Graph Fine-Tuning Upon Open-World Knowledge
·3104 words·15 mins·
Natural Language Processing
Large Language Models
🏢 University of Illinois at Urbana-Champaign
KG-FIT boosts knowledge graph embedding by smartly integrating open-world knowledge from LLMs, achieving significant performance gains.
KFNN: K-Free Nearest Neighbor For Crowdsourcing
·1335 words·7 mins·
AI Applications
Healthcare
🏢 China University of Geosciences
KFNN dynamically determines optimal neighborhood sizes for label integration in crowdsourcing, significantly boosting accuracy and robustness, especially with limited noisy labels.
Key-Grid: Unsupervised 3D Keypoints Detection using Grid Heatmap Features
·2426 words·12 mins·
Computer Vision
3D Vision
🏢 Peking University
Key-Grid: An unsupervised 3D keypoint detector achieving state-of-the-art semantic consistency and accuracy for both rigid and deformable objects using novel grid heatmap features.
Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm
·311 words·2 mins·
Machine Learning
Reinforcement Learning
🏢 MediaTek Research
Novel optimistic RL algorithm using kernel methods achieves no-regret performance in the challenging infinite-horizon average-reward setting.
Kernel PCA for Out-of-Distribution Detection
·2628 words·13 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Shanghai Jiao Tong University
Boosting Out-of-Distribution Detection with Kernel PCA!