Dynamic Prompt Fusion for Multi-Task and Cross-Domain Adaptation in LLMs
This study addresses the generalization limitations commonly observed in large language models under multi-task and cross-domain settings. Unlike prior methods such as SPoT, which depend on fixed prompt templates, it introduces a unified multi-task learning framework with a dynamic prompt scheduling mechanism. A prompt pool combined with a task-aware scheduling strategy lets the method dynamically compose and align prompts for different tasks, improving the model's ability to capture semantic differences between them. During prompt fusion, task embeddings and a gating mechanism finely control the prompt signals, aligning prompt content with task-specific demands while building flexible sharing pathways across tasks. The optimization objective centers on joint multi-task learning and incorporates an automatically learned weighting of the scheduling signals, which mitigates task interference and negative transfer. Sensitivity experiments examining the prompt temperature parameter and the number of tasks confirm that the mechanism maintains model stability and enhances transferability. Overall, the proposed prompt scheduling method significantly improves performance on a range of language understanding and knowledge reasoning tasks, demonstrating its applicability and effectiveness for unified multi-task modeling and cross-domain adaptation.
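
The fusion step described in the abstract lends itself to a compact sketch. The PyTorch code below is a minimal, hypothetical illustration, assuming a softmax gate with a temperature parameter over a shared soft-prompt pool; the pool size, dimensions, class name, and gate architecture are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicPromptFusion(nn.Module):
    """Sketch of task-aware prompt scheduling over a shared prompt pool."""

    def __init__(self, num_prompts: int, prompt_len: int, d_model: int,
                 num_tasks: int, temperature: float = 1.0):
        super().__init__()
        # Shared pool: num_prompts soft prompts, each of shape (prompt_len, d_model).
        self.prompt_pool = nn.Parameter(0.02 * torch.randn(num_prompts, prompt_len, d_model))
        # One learned embedding per task, used to query the pool.
        self.task_embed = nn.Embedding(num_tasks, d_model)
        # Gating network: maps a task embedding to scores over the pool entries.
        self.gate = nn.Linear(d_model, num_prompts)
        self.temperature = temperature

    def forward(self, task_id: torch.Tensor) -> torch.Tensor:
        # task_id: (batch,) integer task identifiers.
        t = self.task_embed(task_id)                                  # (batch, d_model)
        # Temperature-scaled softmax yields the task-aware scheduling weights;
        # lower temperature sharpens the gate toward fewer pool entries.
        weights = F.softmax(self.gate(t) / self.temperature, dim=-1)  # (batch, num_prompts)
        # Fuse the pool into one soft prompt per example.
        return torch.einsum("bn,nld->bld", weights, self.prompt_pool)  # (batch, prompt_len, d_model)

# Usage: the fused prompt would be prepended to the token embeddings of the
# (typically frozen) backbone, so only the pool, task embeddings, and gate train.
fusion = DynamicPromptFusion(num_prompts=16, prompt_len=8, d_model=768, num_tasks=4)
soft_prompt = fusion(torch.tensor([0, 2]))  # (2, 8, 768)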
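The abstract states that the scheduling weights in the joint objective are learned automatically but does not say how. As one plausible instantiation, the sketch below uses homoscedastic uncertainty weighting (Kendall et al., 2018); this choice is an assumption for illustration, not the authors' method.

import torch
import torch.nn as nn

class LearnedTaskWeighting(nn.Module):
    """Hypothetical joint-loss weighting with one learnable weight per task."""

    def __init__(self, num_tasks: int):
        super().__init__()
        # One learnable log-variance per task, trained jointly with the prompts.
        self.log_var = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses: torch.Tensor) -> torch.Tensor:
        # task_losses: (num_tasks,) per-task losses for the current step.
        precision = torch.exp(-self.log_var)
        # High-variance (conflicting) tasks are automatically down-weighted,
        # one way to mitigate task interference and negative transfer.
        return (precision * task_losses + self.log_var).sum()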

