Posters

2024

DeepITE: Designing Variational Graph Autoencoders for Intervention Target Estimation
·2107 words·10 mins
AI Generated Machine Learning Deep Learning 🏢 Ant Group
DeepITE, a novel variational graph autoencoder, efficiently estimates intervention targets from both labeled and unlabeled data, surpassing existing methods in recall and inference speed.
DeepDRK: Deep Dependency Regularized Knockoff for Feature Selection
·2510 words·12 mins
Machine Learning Deep Learning 🏢 University of Illinois at Urbana-Champaign
DeepDRK, a novel deep learning approach, significantly improves feature selection by effectively balancing false discovery rate and power, surpassing existing methods, especially with limited data.
Deep Policy Gradient Methods Without Batch Updates, Target Networks, or Replay Buffers
·3268 words·16 mins
Machine Learning Reinforcement Learning 🏢 University of Alberta
Deep RL excels in simulated robotics but struggles under real-world constraints such as limited computational resources. This paper introduces Action Value Gradient (AVG), a novel incremental deep polic…
Deep linear networks for regression are implicitly regularized towards flat minima
·2602 words·13 mins
AI Generated AI Theory Optimization 🏢 Institute of Mathematics
Deep linear networks implicitly regularize towards flat minima, with sharpness (Hessian’s largest eigenvalue) of minimizers linearly increasing with depth but bounded by a constant times the lower bou…
Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond
·3053 words·15 mins
AI Generated Machine Learning Deep Learning 🏢 University of Cambridge
A simple, yet accurate model unveils deep learning’s mysteries, providing empirical insights into grokking, double descent, and gradient boosting, offering a new lens for analyzing neural network beha…
Deep Learning in Medical Image Registration: Magic or Mirage?
·1588 words·8 mins
AI Applications Healthcare 🏢 Penn Image Computing and Science Laboratory
Deep learning (DL) image registration methods sometimes underperform classical methods, especially under data distribution shifts; this study reveals when each approach excels.
Deep Homomorphism Networks
·1657 words·8 mins
AI Theory Generalization 🏢 Roku, Inc.
Deep Homomorphism Networks (DHNs) boost graph neural network (GNN) expressiveness by efficiently detecting subgraph patterns using a novel graph homomorphism layer.
Deep Graph Neural Networks via Posteriori-Sampling-based Node-Adaptative Residual Module
·2126 words·10 mins
Machine Learning Deep Learning 🏢 Westlake University
PSNR, a novel node-adaptive residual module, significantly improves deep GNN performance by mitigating over-smoothing and handling missing data.
Deep Graph Mating
·1581 words·8 mins
Machine Learning Transfer Learning 🏢 University of Sydney
Deep Graph Mating (GRAMA) enables training-free knowledge transfer in GNNs, achieving results comparable to pre-trained models without retraining or labeled data.
Deep Equilibrium Algorithmic Reasoning
·2322 words·11 mins
Machine Learning Deep Learning 🏢 University of Cambridge
Deep Equilibrium Algorithmic Reasoners (DEARs) achieve superior performance on algorithmic tasks by directly solving for the equilibrium point of a neural network, eliminating the need for iterative r…
Deep Correlated Prompting for Visual Recognition with Missing Modalities
·1823 words·9 mins
Multimodal Learning Vision-Language Models 🏢 College of Intelligence and Computing, Tianjin University
Deep Correlated Prompting enhances large multimodal models’ robustness against missing data by leveraging inter-layer and cross-modality correlations in prompts, achieving superior performance with mi…
Deep Bayesian Active Learning for Preference Modeling in Large Language Models
·2339 words·11 mins
Natural Language Processing Large Language Models 🏢 University of Oxford
BAL-PM, a novel active learning approach, drastically reduces human feedback in LLM preference modeling by leveraging both model uncertainty and prompt distribution diversity, achieving 33%-68% fewer …
DECRL: A Deep Evolutionary Clustering Jointed Temporal Knowledge Graph Representation Learning Approach
·2468 words·12 mins
AI Generated Machine Learning Representation Learning 🏢 Zhejiang University
DECRL, a novel deep learning approach to temporal knowledge graph representation learning, captures high-order correlation evolution and outperforms existing methods.
Decoupling Semantic Similarity from Spatial Alignment for Neural Networks.
·2318 words·11 mins
Computer Vision Representation Learning 🏢 Google DeepMind
Researchers developed semantic RSMs, a novel approach to measure semantic similarity in neural networks, improving image retrieval and aligning network representations with predicted class probabiliti…
Decoupled Kullback-Leibler Divergence Loss
·2254 words·11 mins
Computer Vision Image Classification 🏢 The Chinese University of Hong Kong
Improved Kullback-Leibler (IKL) divergence loss achieves state-of-the-art adversarial robustness and competitive knowledge distillation performance by addressing KL loss’s limitations.
Decomposing and Interpreting Image Representations via Text in ViTs Beyond CLIP
·2855 words·14 mins
Computer Vision Vision-Language Models 🏢 University of Maryland, College Park
This paper presents a general framework for interpreting Vision Transformer (ViT) components, mapping their contributions to CLIP space for textual interpretation, and introduces a scoring function fo…
Decomposed Prompt Decision Transformer for Efficient Unseen Task Generalization
·2344 words·12 mins
Machine Learning Reinforcement Learning 🏢 Wuhan University
Decomposed Prompt Decision Transformer (DPDT) efficiently learns prompts for unseen tasks using a two-stage paradigm, achieving superior performance in multi-task offline reinforcement learning.
Decomposable Transformer Point Processes
·2120 words·10 mins
AI Generated Machine Learning Deep Learning 🏢 University of Cambridge
Decomposable Transformer Point Processes (DTPP) dramatically accelerates marked point process inference by using a mixture of log-normals for inter-event times and Transformers for marks, outperformin…
Decoding-Time Language Model Alignment with Multiple Objectives
·3392 words·16 mins
Natural Language Processing Large Language Models 🏢 Tsinghua University
Multi-objective decoding (MOD) efficiently aligns language models to diverse user needs by decoding the next token from a weighted combination of predictions from multiple base models trained on indiv…
Decision-Making Behavior Evaluation Framework for LLMs under Uncertain Context
·2519 words·12 mins
Natural Language Processing Large Language Models 🏢 University of Illinois at Urbana-Champaign
A new framework reveals LLMs’ human-like decision-making tendencies but highlights significant variations and biases influenced by demographic factors, underscoring the need for ethical deployment.