
Posters

2024

Full-Atom Peptide Design with Geometric Latent Diffusion
·2511 words·12 mins
Machine Learning Deep Learning 🏢 Tsinghua University
PepGLAD, a novel generative model, revolutionizes full-atom peptide design by leveraging geometric latent diffusion to significantly enhance peptide diversity and binding affinity.
FUGAL: Feature-fortified Unrestricted Graph Alignment
·2409 words·12 mins
AI Theory Optimization 🏢 IIT Delhi
FUGAL: a groundbreaking graph alignment method surpassing state-of-the-art accuracy without compromising efficiency by directly aligning adjacency matrices.
FUG: Feature-Universal Graph Contrastive Pre-training for Graphs with Diverse Node Features
·2145 words·11 mins
Machine Learning Self-Supervised Learning 🏢 Tianjin University
FUG: A new graph contrastive pre-training strategy solves GNN transferability issues across datasets with diverse node features, achieving comparable performance to retraining while significantly impr…
FSP-Laplace: Function-Space Priors for the Laplace Approximation in Bayesian Deep Learning
·3503 words·17 mins
AI Generated Machine Learning Deep Learning 🏢 Tübingen AI Center, University of Tübingen
FSP-Laplace efficiently integrates interpretable function-space priors into Bayesian deep learning via a novel Laplace approximation, significantly improving uncertainty estimates and model performance.
Frustratingly Easy Test-Time Adaptation of Vision-Language Models
·2379 words·12 mins
Multimodal Learning Vision-Language Models 🏢 University of Trento
Boost VLM performance with ZERO: a simple, fast Test-Time Adaptation method requiring only a single forward pass and exceeding state-of-the-art accuracy!
Frozen-DETR: Enhancing DETR with Image Understanding from Frozen Foundation Models
·2491 words·12 mins
Computer Vision Object Detection 🏢 School of Computer Science and Engineering, Sun Yat-Sen University
Frozen-DETR boosts object detection accuracy by integrating frozen foundation models as feature enhancers, achieving significant performance gains without the computational cost of fine-tuning.
From Unstructured Data to In-Context Learning: Exploring What Tasks Can Be Learned and When
·1923 words·10 mins
Natural Language Processing Large Language Models 🏢 University of Michigan
LLMs’ in-context learning surprisingly arises from simple co-occurrence patterns in unstructured data, but positional information is key for complex tasks; ICL fails when patterns are unseen or fixed.
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models
·3334 words·16 mins
Computer Vision Image Generation 🏢 Tsinghua University
Diffusion models, while excelling in image generation, are vulnerable to data poisoning. This paper demonstrates a BadNets-like attack’s effectiveness against diffusion models, causing image misalignment.
From Transparent to Opaque: Rethinking Neural Implicit Surfaces with $\alpha$-NeuS
·1946 words·10 mins
Computer Vision 3D Vision 🏢 Key Laboratory of System Software (CAS) and State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences
α-NeuS: A novel method for neural implicit surface reconstruction that accurately reconstructs both transparent and opaque objects simultaneously by leveraging the unique properties of distance fields.
From Text to Trajectory: Exploring Complex Constraint Representation and Decomposition in Safe Reinforcement Learning
·3972 words·19 mins
AI Generated Machine Learning Reinforcement Learning 🏢 Beihang University
TTCT translates natural language constraints into effective training signals for safe reinforcement learning, enabling agents to learn safer policies with lower violation rates and zero-shot transfer.
From Similarity to Superiority: Channel Clustering for Time Series Forecasting
·4001 words·19 mins
AI Generated Machine Learning Deep Learning 🏢 Yale University
Channel Clustering Module (CCM) boosts time series forecasting accuracy by intelligently grouping similar channels, improving model performance and generalization.
From News to Forecast: Integrating Event Analysis in LLM-Based Time Series Forecasting with Reflection
·3055 words·15 mins
AI Applications Finance 🏢 School of Electrical and Computer Engineering, the University of Sydney
Boost time series forecasting accuracy by integrating news data and LLM-based agents!
From Linear to Linearizable Optimization: A Novel Framework with Applications to Stationary and Non-stationary DR-submodular Optimization
·1591 words·8 mins
AI Theory Optimization 🏢 McGill University
A novel framework extends optimization algorithms from linear/quadratic functions to a broader class of ‘upper-linearizable’ functions, providing a unified approach for concave and DR-submodular optimization.
From Instance Training to Instruction Learning: Task Adapters Generation from Instructions
·2311 words·11 mins
Natural Language Processing Large Language Models 🏢 Tencent AI Lab
TAGI, a novel method, generates task-specific adapters from instructions, enhancing LLM cross-task generalization by using knowledge distillation and a two-stage hypernetwork training process.
From Dictionary to Tensor: A Scalable Multi-View Subspace Clustering Framework with Triple Information Enhancement
·2898 words·14 mins
AI Generated Machine Learning Clustering 🏢 Hebei Normal University
STONE, a novel multi-view subspace clustering framework, enhances scalability and accuracy by introducing an anchor dictionary learning mechanism and triple information enhancement.
From Chaos to Clarity: 3DGS in the Dark
·2516 words·12 mins
Computer Vision 3D Vision 🏢 Nanyang Technological University
Researchers developed a self-supervised learning framework to create high-dynamic-range 3D Gaussian Splatting (3DGS) models from noisy raw images, significantly improving reconstruction quality and sp…
From Causal to Concept-Based Representation Learning
·1733 words·9 mins
AI Theory Representation Learning 🏢 Carnegie Mellon University
This paper introduces a novel geometric approach to concept-based representation learning, provably recovering interpretable concepts from diverse data without strict causal assumptions or many interventions.
From Biased to Unbiased Dynamics: An Infinitesimal Generator Approach
·1735 words·9 mins
Machine Learning Deep Learning 🏢 Istituto Italiano di Tecnologia
Learn unbiased molecular dynamics from limited biased data using a novel infinitesimal generator approach that accurately estimates eigenfunctions and eigenvalues even with suboptimal biasing.
From an Image to a Scene: Learning to Imagine the World from a Million 360° Videos
·2541 words·12 mins
AI Generated Computer Vision 3D Vision 🏢 University of Washington
ODIN, trained on a million 360° videos (360-1M), generates realistic novel views and reconstructs 3D scenes from single images.
Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching
·2541 words·12 mins
AI Generated Multimodal Learning Audio-Visual Learning 🏢 Zhejiang University
FRIEREN: a novel video-to-audio generation network using rectified flow matching achieves state-of-the-art performance by improving audio quality, temporal alignment, and generation efficiency.