
Posters

2024

Intruding with Words: Towards Understanding Graph Injection Attacks at the Text Level
·5345 words·26 mins
AI Theory Robustness 🏢 Renmin University of China
Researchers unveil text-level graph injection attacks, revealing a new vulnerability in GNNs and highlighting the importance of text interpretability in attack success.
Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity
·2643 words·13 mins
AI Applications Robotics 🏢 Princeton University
Robots using LLMs for task planning often make unsafe or wrong decisions due to LLM hallucination and ambiguity in instructions. This paper introduces ‘introspective planning,’ a novel method that aligns the robot’s uncertainty with the inherent ambiguity of the task.
Introducing Spectral Attention for Long-Range Dependency in Time Series Forecasting
·3194 words·15 mins
Machine Learning Deep Learning 🏢 Seoul National University
Spectral Attention boosts long-range dependency capture in time series forecasting, achieving state-of-the-art results across various models and datasets.
Intrinsic Robustness of Prophet Inequality to Strategic Reward Signaling
·248 words·2 mins
AI Generated AI Theory Robustness 🏢 Chinese University of Hong Kong
Strategic players can manipulate reward signals, but simple threshold policies still achieve a surprisingly good approximation to the optimal prophet value, even in this more realistic setting.
IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors
·2763 words·13 mins
AI Generated Machine Learning Semi-Supervised Learning 🏢 Massive Data Computing Lab, Harbin Institute of Technology
IntraMix: Boost GNN accuracy by cleverly generating high-quality labels and enriching node neighborhoods using intra-class Mixup.
Interventionally Consistent Surrogates for Complex Simulation Models
·1862 words·9 mins
AI Generated AI Theory Causality 🏢 University of Oxford
This paper introduces a novel framework for creating interventionally consistent surrogate models for complex simulations, addressing computational limitations and ensuring accurate policy evaluation.
Interventional Causal Discovery in a Mixture of DAGs
·1892 words·9 mins
AI Generated AI Theory Causality 🏢 Carnegie Mellon University
This study presents CADIM, an adaptive algorithm that uses interventions to learn true causal relationships from mixtures of DAGs, achieving near-optimal intervention sizes and providing quantifiable optimality guarantees.
Intervention and Conditioning in Causal Bayesian Networks
·296 words·2 mins
AI Theory Causality 🏢 Cornell University
Researchers uniquely estimate probabilities in Causal Bayesian Networks using simple independence assumptions, enabling analysis from observational data and simplifying counterfactual probability calculations.
Interpreting the Weight Space of Customized Diffusion Models
·3822 words·18 mins
Computer Vision Image Generation 🏢 UC Berkeley
Researchers model a manifold of customized diffusion models as a subspace of weights, enabling controllable creation of new models via sampling, editing, and inversion from a single image.
Interpreting Learned Feedback Patterns in Large Language Models
·2900 words·14 mins
Natural Language Processing Large Language Models 🏢 University of Oxford
Researchers developed methods to measure and interpret the divergence between learned feedback patterns (LFPs) in LLMs and human preferences, helping minimize discrepancies between LLM behavior and training objectives.
Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)
·4104 words·20 mins
AI Generated Multimodal Learning Vision-Language Models 🏢 Harvard University
SpLiCE unlocks CLIP’s potential by transforming its dense, opaque representations into sparse, human-interpretable concept embeddings.
Interpreting and Analysing CLIP's Zero-Shot Image Classification via Mutual Knowledge
·3358 words·16 mins
Multimodal Learning Vision-Language Models 🏢 Vrije Universiteit Brussel
CLIP’s zero-shot image classification decisions are made interpretable using a novel mutual-knowledge approach based on textual concepts, demonstrating effective and human-friendly analysis across diverse datasets.
Interpretable Mesomorphic Networks for Tabular Data
·2985 words·15 mins
Machine Learning Interpretability 🏢 University of Freiburg
Interpretable Mesomorphic Neural Networks (IMNs) achieve accuracy comparable to black-box models while offering free-lunch explainability for tabular data through instance-specific linear models generated by deep hypernetworks.
Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors
·1664 words·8 mins
AI Generated Computer Vision Image Generation 🏢 York University
Interpretable lightweight transformers are built by unrolling graph smoothness priors, achieving high performance with significantly fewer parameters than conventional transformers.
Interpretable Image Classification with Adaptive Prototype-based Vision Transformers
·5008 words·24 mins
Computer Vision Image Classification 🏢 Dartmouth College
ProtoViT: a novel interpretable image classification method using Vision Transformers and adaptive prototypes, achieving higher accuracy and providing clear explanations.
Interpretable Generalized Additive Models for Datasets with Missing Values
·2769 words·13 mins
Machine Learning Interpretability 🏢 Duke University
M-GAM: Interpretable additive models handling missing data with superior accuracy & sparsity!
Interpretable Concept-Based Memory Reasoning
·2660 words·13 mins
AI Theory Interpretability 🏢 KU Leuven
CMR: a novel Concept-Based Memory Reasoner delivers verifiable, human-understandable task predictions via a neural selection mechanism over a set of interpretable logic rules, achieving accuracy comparable to black-box models.
Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents
·2372 words·12 mins
Machine Learning Reinforcement Learning 🏢 Computer Science Department, TU Darmstadt
Successive Concept Bottleneck Agents (SCoBots) improve reinforcement learning by integrating interpretable layers, enabling concept-level inspection and human-in-the-loop revisions to fix misalignment.
Interpolating Item and User Fairness in Multi-Sided Recommendations
·1620 words·8 mins
AI Theory Fairness 🏢 MIT
The FAIR framework and FORM algorithm achieve flexible multi-stakeholder fairness in online recommendation systems, balancing platform revenue with user and item fairness.
InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD
·2071 words·10 mins
Multimodal Learning Vision-Language Models 🏢 Shanghai Artificial Intelligence Laboratory
InternLM-XComposer2-4KHD pioneers high-resolution image understanding in LVLMs, scaling processing from 336 pixels to 4K HD and beyond, achieving state-of-the-art results on multiple benchmarks.