
Few-Shot Learning

Tri-Level Navigator: LLM-Empowered Tri-Level Learning for Time Series OOD Generalization
·1858 words·9 mins
Machine Learning Few-Shot Learning 🏒 Tongji University
LLM-powered Tri-level learning framework enhances time series OOD generalization.
Transformers Learn to Achieve Second-Order Convergence Rates for In-Context Linear Regression
·3338 words·16 mins
Machine Learning Few-Shot Learning 🏒 University of Southern California
Transformers surprisingly learn second-order optimization methods for in-context linear regression, achieving exponentially faster convergence than gradient descent!
Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series
·3445 words·17 mins
Machine Learning Few-Shot Learning 🏒 IBM Research
Tiny Time Mixers (TTMs) achieve state-of-the-art zero/few-shot multivariate time series forecasting, outperforming existing benchmarks while drastically reducing computational requirements.
Test-Time Adaptation Induces Stronger Accuracy and Agreement-on-the-Line
·2874 words·14 mins
Machine Learning Few-Shot Learning 🏒 Carnegie Mellon University
Test-time adaptation strengthens the linear correlation between in- and out-of-distribution accuracy, enabling precise OOD performance prediction and hyperparameter optimization without labeled OOD data.
Stepping Forward on the Last Mile
·2832 words·14 mins
Machine Learning Few-Shot Learning 🏒 Qualcomm AI Research
On-device training with fixed-point forward gradients enables efficient model personalization on resource-constrained edge devices, overcoming backpropagation’s memory limitations.
Prospective Representation Learning for Non-Exemplar Class-Incremental Learning
·2489 words·12 mins
Machine Learning Few-Shot Learning 🏒 Wuhan University
Prospective Representation Learning (PRL) revolutionizes non-exemplar class-incremental learning by proactively reserving embedding space for new classes and minimizing the shock of new data on previously learned classes.
Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs
·2558 words·13 mins
Machine Learning Few-Shot Learning 🏒 Caltech
CoDA-NO, a novel neural operator, revolutionizes multiphysics PDE solving via codomain tokenization, enabling efficient self-supervised pretraining and few-shot learning for superior generalization.
Pin-Tuning: Parameter-Efficient In-Context Tuning for Few-Shot Molecular Property Prediction
·4134 words·20 mins
AI Generated Machine Learning Few-Shot Learning 🏒 State Key Laboratory of Multimodal Artificial Intelligence Systems
Pin-Tuning: A parameter-efficient method for few-shot molecular property prediction that significantly improves accuracy with fewer trainable parameters via in-context tuning and Bayesian weight consolidation.
OTTER: Effortless Label Distribution Adaptation of Zero-shot Models
·2811 words·14 mins
Machine Learning Few-Shot Learning 🏒 Department of Computer Sciences, University of Wisconsin-Madison
OTTER effortlessly adapts zero-shot models to new tasks by adjusting predictions using optimal transport, improving accuracy significantly without extra training data.
Mixture of Adversarial LoRAs: Boosting Robust Generalization in Meta-Tuning
·2777 words·14 mins
Computer Vision Few-Shot Learning 🏒 City University of Hong Kong
Boosting Robust Few-Shot Learning with Adversarial Meta-Tuning!
Mind the Gap Between Prototypes and Images in Cross-domain Finetuning
·3462 words·17 mins
Computer Vision Few-Shot Learning 🏒 Hong Kong Baptist University
CoPA improves cross-domain few-shot learning by adapting separate transformations for prototype and image embeddings, significantly enhancing performance and revealing better representation clusters.
Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning
·1996 words·10 mins
Computer Vision Few-Shot Learning 🏒 Northwestern Polytechnical University
Meta-Exploiting Frequency Prior enhances cross-domain few-shot learning by leveraging image frequency decomposition and consistency priors to improve model generalization and efficiency.
Knowledge Composition using Task Vectors with Learned Anisotropic Scaling
·4960 words·24 mins
AI Generated Computer Vision Few-Shot Learning 🏒 Australian Institute for Machine Learning
aTLAS: a novel parameter-efficient fine-tuning method using learned anisotropic scaling of task vectors for enhanced knowledge composition and transfer.
How Transformers Utilize Multi-Head Attention in In-Context Learning? A Case Study on Sparse Linear Regression
·2236 words·11 mins
AI Generated Machine Learning Few-Shot Learning 🏒 University of Hong Kong
Multi-head transformers utilize distinct attention patterns across layers: multiple heads are essential for initial data preprocessing, while a single head suffices for subsequent optimization steps, o…
Generate Universal Adversarial Perturbations for Few-Shot Learning
·2582 words·13 mins
Machine Learning Few-Shot Learning 🏒 Huazhong University of Science and Technology
Researchers developed FSAFW, a novel framework generating universal adversarial perturbations effective against various Few-Shot Learning paradigms, surpassing baseline methods by over 16%.
Few-Shot Task Learning through Inverse Generative Modeling
·2587 words·13 mins
Machine Learning Few-Shot Learning 🏒 MIT
Few-shot task learning through inverse generative modeling (FTL-IGM) enables AI agents to quickly master new tasks from minimal data by leveraging invertible generative models.
Few-Shot Diffusion Models Escape the Curse of Dimensionality
·419 words·2 mins
Machine Learning Few-Shot Learning 🏒 Shanghai Jiao Tong University
Few-shot diffusion models efficiently generate customized images; this paper provides the first theoretical explanation, proving improved approximation and optimization bounds, escaping the curse of dimensionality.
Fast Graph Sharpness-Aware Minimization for Enhancing and Accelerating Few-Shot Node Classification
·3123 words·15 mins
AI Generated Machine Learning Few-Shot Learning 🏒 Hong Kong University of Science and Technology
Fast Graph Sharpness-Aware Minimization (FGSAM) accelerates few-shot node classification by cleverly combining GNNs and MLPs for efficient, high-performing training.
EPIC: Effective Prompting for Imbalanced-Class Data Synthesis in Tabular Data Classification via Large Language Models
·5652 words·27 mins
AI Generated Machine Learning Few-Shot Learning 🏒 KAIST
EPIC: Effective prompting makes LLMs generate high-quality synthetic tabular data, significantly boosting imbalanced-class classification.
D2R2: Diffusion-based Representation with Random Distance Matching for Tabular Few-shot Learning
·1776 words·9 mins
Machine Learning Few-Shot Learning 🏒 Hong Kong University of Science and Technology
D2R2, a novel diffusion-based model for tabular few-shot learning, achieves state-of-the-art results by leveraging semantic knowledge and distance matching.