Towards Multimodal Continual Knowledge Embedding with Modality Forgetting Modulation
The continuous emergence of new entities, relations, triples, and multimodal information drives the dynamic evolution of multimodal knowledge graphs (MMKGs). However, existing MMKG embedding models assume a static setting: retraining from scratch on a growing MMKG wastes previously learned knowledge, while fine-tuning on new knowledge easily leads to catastrophic forgetting, severely limiting their applicability in real-world scenarios. To address this, we propose MoFot, a multimodal continual representation learning framework for growing MMKGs. Unlike existing static multimodal embedding methods, MoFot focuses on alleviating catastrophic forgetting rather than retraining to adapt to new knowledge. Specifically, MoFot mitigates the catastrophic forgetting caused by parameter updates and by differing forgetting rates across modalities through a multimodal collaborative modulation mechanism, which retains previously learned multimodal knowledge across snapshots via multimodal weight modulation and multimodal feature modulation. MoFot outperforms existing MMKG embedding, KG continual learning, and MMKG inductive models. Experimental results demonstrate that MoFot not only avoids forgetting but also reinforces old knowledge while learning new knowledge, achieving adaptation to new knowledge while mitigating forgetting of old knowledge.
Association for the Advancement of Artificial Intelligence (AAAI)
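The abstract describes two components, multimodal weight modulation and multimodal feature modulation, without giving their concrete form. The sketch below is a minimal, hypothetical illustration of what per-modality modulation for continual MMKG embedding could look like: a learnable gate blends each modality's previous-snapshot features with newly learned ones, and an EWC-style quadratic drift penalty with modality-specific strengths discourages each modality's encoder from moving away from its previous-snapshot parameters. The class name `MultimodalModulation`, the gate parameterization, and the drift-weight values are assumptions for illustration only and are not taken from the MoFot paper.

```python
# Minimal sketch (assumptions labeled): per-modality feature and weight
# modulation for continual MMKG embedding. Not the authors' actual method.
import torch
import torch.nn as nn


class MultimodalModulation(nn.Module):
    """Blend old-snapshot and newly learned features per modality, and
    penalize parameter drift at modality-specific rates."""

    def __init__(self, dim, modalities=("structure", "image", "text")):
        super().__init__()
        self.modalities = modalities
        # Feature modulation: a learnable gate per modality deciding how much
        # of the previous snapshot's feature to retain (hypothetical design).
        self.gates = nn.ParameterDict(
            {m: nn.Parameter(torch.zeros(dim)) for m in modalities}
        )
        # Weight modulation: per-modality drift-penalty strength, reflecting
        # that modalities may forget at different rates (hypothetical values).
        self.drift_weight = {"structure": 1.0, "image": 0.5, "text": 0.5}

    def fuse(self, old_feats, new_feats):
        """old_feats / new_feats: dict modality -> (batch, dim) tensors."""
        fused = {}
        for m in self.modalities:
            g = torch.sigmoid(self.gates[m])  # gate in (0, 1)
            fused[m] = g * old_feats[m] + (1 - g) * new_feats[m]
        return fused

    def drift_loss(self, new_params, old_params):
        """Quadratic penalty keeping each modality's new encoder parameters
        close to the frozen previous-snapshot parameters.
        new_params / old_params: dict modality -> list of tensors."""
        loss = 0.0
        for m in self.modalities:
            for p_new, p_old in zip(new_params[m], old_params[m]):
                loss = loss + self.drift_weight[m] * (
                    (p_new - p_old.detach()) ** 2
                ).sum()
        return loss
```

In a snapshot update, the drift loss would simply be added to the usual link-prediction objective on the new triples, so the trade-off between adapting to new knowledge and retaining old knowledge is controlled by the per-modality weights; this is a stand-in for whatever weighting scheme the paper actually uses.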
Related Results
Multimodal Emotion Recognition and Human Computer Interaction for AI-Driven Mental Health Support (Preprint)
BACKGROUND
Mental health has become one of the most urgent global health issues of the twenty-first century. The World Health Organization (WHO) reports tha...
Continual Learning of Large Language Models: A Comprehensive Survey
The challenge of effectively and efficiently adapting statically pre-trained Large Language Models (LLMs) to ever-evolving data distributions remains predominant. When tailored for...
Imagined worldviews in John Lennon’s “Imagine”: a multimodal re-performance / Visões de mundo imaginadas no “Imagine” de John Lennon: uma re-performance multimodal
Abstract: This paper addresses the issue of multimodal re-performance, a concept developed by us, in view of the fact that the famous song “Imagine”, by John Lennon, was published ...
Association of Accelerated Long-term Forgetting and Senescence-related Blood-borne Factors in Asymptomatic Individuals From Families With Autosomal Dominant Alzheimer’s Disease
Abstract
Background: Accelerated long-term forgetting has been identified in preclinical Alzheimer’s disease (AD), and is attributed to a selective impairment of memory con...
Some Functions of Collective Forgetting
Coerced forgetting — forgetting as repressive erasure — has been a hallmark of many of the totalitarian regimes of the 20th century. However, the act of forgetting is not always ne...
Memory that lasts
Catastrophic forgetting, the tendency of learning systems to lose previously acquired knowledge when trained on new information, remains one of the most fundamental challenges in c...
Automatic Modulation Recognition Method Based on Multimodal I/Q-FRFT Fusion
Abstract
Automatic modulation recognition (AMR) is a key technology in the domain of cognitive radio communications. Accurately identifying the modulation schemes of signal...
AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model
Multimodal sentiment analysis is an essential task in natural language processing which refers to the fact that machines can analyze and recognize emotions through logical reasonin...

