Posters
2024
fMRI predictors based on language models of increasing complexity recover brain left lateralization
·2912 words·14 mins·
Natural Language Processing
Large Language Models
🏢 CNRS, EHESS
Larger language models better predict brain activity in fMRI studies, with left-hemisphere prediction significantly increasing as model complexity scales up, reconciling classic aphasia findings with …
FM-Delta: Lossless Compression for Storing Massive Fine-tuned Foundation Models
·3523 words·17 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 Beijing University of Posts and Telecommunications
FM-Delta: Lossless compression halves cloud storage for massive fine-tuned language models, saving costs without sacrificing accuracy.
FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner
·1980 words·10 mins·
Computer Vision
Image Generation
🏢 Tsinghua University
FlowTurbo: Blazing-fast, high-quality flow-based image generation via a velocity refiner!
FlowLLM: Flow Matching for Material Generation with Large Language Models as Base Distributions
·2004 words·10 mins·
AI Generated
Natural Language Processing
Large Language Models
🏢 Meta AI
FlowLLM revolutionizes material design by cleverly merging large language models and Riemannian flow matching, yielding a 300% boost in stable material generation!
Flow Snapshot Neurons in Action: Deep Neural Networks Generalize to Biological Motion Perception
·2635 words·13 mins·
Computer Vision
Action Recognition
🏢 College of Computing and Data Science, Nanyang Technological University, Singapore
Deep neural networks finally match human biological motion perception capabilities by leveraging patch-level optical flows and innovative neuron designs, achieving a 29% accuracy improvement.
Flow Priors for Linear Inverse Problems via Iterative Corrupted Trajectory Matching
·2171 words·11 mins·
Computer Vision
Image Generation
🏢 UC Los Angeles
ICTM efficiently solves linear inverse problems using flow priors by iteratively optimizing local MAP objectives, outperforming other flow-based methods.
FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations
·1833 words·9 mins·
Natural Language Processing
Large Language Models
🏢 University of Maryland
FLoRA enables efficient & private federated fine-tuning of LLMs via novel stacking-based heterogeneous low-rank adaptation, surpassing existing methods.
FlexSBDD: Structure-Based Drug Design with Flexible Protein Modeling
·2072 words·10 mins·
Machine Learning
Deep Learning
🏢 Princeton University
FlexSBDD, a novel deep generative model, accurately predicts flexible protein-ligand complex structures, generating high-affinity drug molecules while overcoming the limitations of rigid protein model…
FlexPlanner: Flexible 3D Floorplanning via Deep Reinforcement Learning in Hybrid Action Space with Multi-Modality Representation
·3516 words·17 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Dept. of CSE & School of AI & MoE Key Lab of AI, Shanghai Jiao Tong University
FlexPlanner: Deep reinforcement learning solves flexible 3D floorplanning, improving wirelength and alignment significantly.
Flexible mapping of abstract domains by grid cells via self-supervised extraction and projection of generalized velocity signals
·2121 words·10 mins·
Machine Learning
Self-Supervised Learning
🏢 MIT
The brain’s flexible mapping of abstract domains is achieved via self-supervised extraction and projection of generalized velocity signals by grid cells, enabling efficient map generation.
Flexible Context-Driven Sensory Processing in Dynamical Vision Models
·2040 words·10 mins·
Computer Vision
Vision-Language Models
🏢 MIT
Biologically-inspired DCnet neural network flexibly modulates visual processing based on context, outperforming existing models on visual search and attention tasks.
FlexCap: Describe Anything in Images in Controllable Detail
·2861 words·14 mins·
Multimodal Learning
Vision-Language Models
🏢 Google DeepMind
FlexCap generates controllable, region-specific image descriptions of varying lengths, achieving state-of-the-art zero-shot visual question answering.
Flaws can be Applause: Unleashing Potential of Segmenting Ambiguous Objects in SAM
·2042 words·10 mins·
Computer Vision
Image Segmentation
🏢 Chinese University of Hong Kong
A-SAM: Turning SAM’s inherent ambiguity into an advantage for controllable, diverse, and convincing ambiguous object segmentation.
Flatten Anything: Unsupervised Neural Surface Parameterization
·2390 words·12 mins·
Computer Vision
3D Vision
🏢 Department of Computer Science, City University of Hong Kong
Flatten Anything Model (FAM) revolutionizes neural surface parameterization with unsupervised learning, handling complex topologies and unstructured data fully automatically.
FLAME: Factuality-Aware Alignment for Large Language Models
·2851 words·14 mins·
Natural Language Processing
Large Language Models
🏢 University of Waterloo
FLAME: A novel alignment method enhances large language model factuality by addressing hallucination in supervised fine-tuning and reinforcement learning, resulting in more accurate and helpful AI ass…
Fixed Confidence Best Arm Identification in the Bayesian Setting
·1424 words·7 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Università degli Studi di Milano
Bayesian best-arm identification algorithm achieves near-optimal sample complexity by incorporating an early-stopping criterion.
First-Order Minimax Bilevel Optimization
·1619 words·8 mins·
AI Generated
Machine Learning
Meta Learning
🏢 University at Buffalo
Two novel first-order algorithms, FOSL and MemCS, efficiently solve multi-block minimax bilevel optimization problems, significantly improving performance in deep AUC maximization and robust meta-lear…
First-Order Methods for Linearly Constrained Bilevel Optimization
·392 words·2 mins·
AI Theory
Optimization
🏢 Weizmann Institute of Science
First-order methods conquer linearly constrained bilevel optimization, achieving near-optimal convergence rates and enhancing high-dimensional applicability.
First-Explore, then Exploit: Meta-Learning to Solve Hard Exploration-Exploitation Trade-Offs
·3099 words·15 mins·
Machine Learning
Reinforcement Learning
🏢 Department of Computer Science, University of British Columbia
Meta-RL agents often fail to explore effectively in environments where optimal behavior requires sacrificing immediate rewards for greater future gains. First-Explore, a novel method, tackles this by…
FineStyle: Fine-grained Controllable Style Personalization for Text-to-image Models
·2833 words·14 mins·
Multimodal Learning
Vision-Language Models
🏢 Google DeepMind
FineStyle enables fine-grained controllable style personalization for text-to-image models using a novel concept-oriented data scaling and parameter-efficient adapter tuning, mitigating content leakag…