Posters
2024
LaSe-E2V: Towards Language-guided Semantic-aware Event-to-Video Reconstruction
·2343 words·11 mins·
Multimodal Learning
Vision-Language Models
🏢 Hong Kong University of Science and Technology
LaSe-E2V: Language-guided semantic-aware event-to-video reconstruction uses text descriptions to improve video quality and consistency.
LaSCal: Label-Shift Calibration without target labels
·3140 words·15 mins·
Machine Learning
Unsupervised Learning
🏢 ESAT-PSI, KU Leuven
LaSCal, a novel label-free calibration method, ensures reliable model predictions under label shift by using a consistent calibration error estimator, achieving effective and robust unsupervised calib…
Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization
·2166 words·11 mins·
AI Generated
Machine Learning
Deep Learning
🏢 UC Berkeley
Large stepsize GD on non-homogeneous neural networks shows monotonic risk reduction after an initial oscillating phase, demonstrating implicit bias and optimization gains.
Large Spatial Model: End-to-end Unposed Images to Semantic 3D
·1766 words·9 mins·
Computer Vision
3D Vision
🏢 NVIDIA Research
Large Spatial Model (LSM) achieves real-time semantic 3D reconstruction from just two unposed images, unifying multiple 3D vision tasks in a single feed-forward pass.
Large Scale Transfer Learning for Tabular Data via Language Modeling
·2834 words·14 mins·
Machine Learning
Transfer Learning
🏢 University of Washington
TABULA-8B, a novel language model for tabular prediction, achieves state-of-the-art zero-shot and few-shot performance across various benchmarks, exceeding existing methods by 5-15 percentage points.
Large Pre-trained time series models for cross-domain Time series analysis tasks
·1870 words·9 mins·
Machine Learning
Self-Supervised Learning
🏢 Georgia Institute of Technology
Large Pre-trained Time-series Models (LPTM) achieves superior forecasting and time-series classification results using a novel adaptive segmentation method, requiring up to 40% less data and 50% less …
Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning
·2160 words·11 mins·
Natural Language Processing
Large Language Models
🏢 Beijing University of Technology
LLM-DA dynamically adapts LLM-generated rules for accurate, interpretable temporal knowledge graph reasoning, significantly improving accuracy without fine-tuning.
Large Language Models Play StarCraft II: Benchmarks and A Chain of Summarization Approach
·4380 words·21 mins·
AI Applications
Gaming
🏢 AI Centre, Department of Computer Science, UCL
LLMs conquer StarCraft II: A new benchmark and Chain of Summarization method enable real-time strategic gameplay evaluation, showcasing impressive LLM strategic abilities.
Large Language Models Must Be Taught to Know What They Don't Know
·3020 words·15 mins·
Natural Language Processing
Large Language Models
🏢 New York University
Teaching LLMs uncertainty for reliable high-stakes predictions: fine-tuning with graded examples significantly improves LLM uncertainty calibration and generalizes well.
Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation
·2032 words·10 mins·
AI Applications
Smart Cities
🏢 University of Tokyo
LLM agents effectively generate realistic personal mobility patterns using semantically rich data.
Large language model validity via enhanced conformal prediction methods
·2089 words·10 mins·
Natural Language Processing
Large Language Models
🏢 Stanford University
New conformal inference methods enhance LLM validity by providing adaptive validity guarantees and improving the quality of LLM outputs, addressing prior methods’ limitations.
Large Language Model Unlearning via Embedding-Corrupted Prompts
·7618 words·36 mins·
Natural Language Processing
Large Language Models
🏢 UC Santa Cruz
ECO prompts enable efficient LLM unlearning by corrupting prompts flagged for forgetting, achieving promising results across various LLMs and tasks with minimal side effects.
Large Language Model Unlearning
·6002 words·29 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 Meta GenAI
This paper presents a novel method for large language model (LLM) unlearning, enabling LLMs to ‘forget’ undesirable behaviors by using only negative examples. This computationally efficient approach o…
Language-Driven Interactive Traffic Trajectory Generation
·2233 words·11 mins·
AI Applications
Autonomous Vehicles
🏢 Shanghai Jiao Tong University
InteractTraj: Generating realistic, interactive traffic trajectories from natural language!
Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models
·2064 words·10 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 CISPA Helmholtz Center for Information Security
Large language models (LLMs) achieve lossless gradient compression, surpassing existing methods by up to 17.2%, thereby advancing distributed learning efficiency.
Language Models as Hierarchy Encoders
·2232 words·11 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 University of Oxford
Language models struggle with hierarchical information. This work introduces Hierarchy Transformer Encoders (HITs), a novel method to retrain transformer encoders using hyperbolic geometry and special…
Language Grounded Multi-agent Reinforcement Learning with Human-interpretable Communication
·2019 words·10 mins·
Natural Language Processing
Human-AI Interaction
🏢 University of Pittsburgh
LangGround: MARL agents learn human-interpretable communication via LLM-grounded training, enabling effective human-agent collaboration.
Lambda: Learning Matchable Prior For Entity Alignment with Unlabeled Dangling Cases
·2851 words·14 mins·
AI Generated
Natural Language Processing
Named Entity Recognition
🏢 Shanghai Jiao Tong University
Lambda: A novel framework tackles entity alignment challenges with unlabeled dangling entities using GNN-based encoding, spectral contrastive learning, and an iterative PU learning algorithm, achievin…
LAM3D: Large Image-Point Clouds Alignment Model for 3D Reconstruction from Single Image
·2617 words·13 mins·
AI Generated
Computer Vision
3D Vision
🏢 Australian National University
LAM3D: A novel framework uses point cloud data to boost single-image 3D mesh reconstruction accuracy, achieving state-of-the-art results in just 6 seconds.
LaKD: Length-agnostic Knowledge Distillation for Trajectory Prediction with Any Length Observations
·1999 words·10 mins·
AI Applications
Autonomous Vehicles
🏢 Beijing Institute of Technology
LaKD: a novel length-agnostic knowledge distillation framework enables accurate trajectory prediction regardless of observation length, overcoming limitations of existing methods.