
Attention-enabled Multi-layer Subword Joint Learning for Chinese Word Embedding

Abstract In recent years, Chinese word embeddings have attracted significant attention in the field of natural language processing (NLP). The complex structures and diverse influences of Chinese characters present distinct challenges for semantic representation. As a result, Chinese word embeddings are primarily investigated in conjunction with characters and their subcomponents. Previous research has demonstrated that word vectors frequently fail to capture the subtle semantics embedded within the complex structure of Chinese characters, and often neglect the varying contributions of subword information to semantics at different levels. To tackle these challenges, we present a weight-based word vector model that takes into account the internal structure of Chinese words at multiple levels. The model categorizes the internal structure of Chinese words into six layers of subword information: words, characters, components, pinyin, strokes, and structures. The semantics of a Chinese word are derived by integrating the subword information from these layers. Moreover, the model accounts for the varying contribution of each subword layer to word semantics: it uses an attention mechanism to determine the weights between and within the subword layers, facilitating comprehensive extraction of word semantics. The word-level subwords act as the attention query for subwords in the other layers to learn semantic bias. Experimental results show that the proposed word vector model achieves improvements on various evaluation tasks, including word similarity, word analogy, text classification, and case studies.
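As a rough illustration of the attention step the abstract describes, the sketch below fuses per-layer subword embeddings by using the word-level vector as the attention query over the other layers. All names, shapes, and the dot-product scoring are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax for the attention weights.
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_subword_layers(word_vec, layer_vecs):
    """Use the word-level vector as the attention query over the other
    subword layers and return their weighted combination."""
    scores = np.array([word_vec @ v for v in layer_vecs])  # dot-product scores
    weights = softmax(scores)                              # inter-layer attention
    return weights @ np.stack(layer_vecs), weights

# Toy example: a d-dimensional word vector plus five hypothetical non-word
# layers (characters, components, pinyin, strokes, structures).
rng = np.random.default_rng(0)
d = 8
word_vec = rng.normal(size=d)
layers = [rng.normal(size=d) for _ in range(5)]
fused, w = fuse_subword_layers(word_vec, layers)
print(fused.shape, w)
```

In the paper's fuller scheme, attention is also applied *within* each layer (e.g. over the strokes of a character); the same query-score-softmax pattern would apply there, with the layer's elements in place of the layers themselves.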

