SCL: Selective Contrastive Learning for Data-driven Zero-shot Relation Extraction
Abstract
Relation extraction has evolved from the supervised setting to the zero-shot setting due to the continual emergence of new relations. Some pioneering works handle zero-shot relation extraction by reformulating it as a proxy task, such as reading comprehension or textual entailment. Nonetheless, the divergence of these proxy task formulations from relation extraction hinders the acquisition of informative semantic representations, leading to subpar performance. Therefore, in this paper, we take a data-driven view of zero-shot relation extraction and handle it under a three-step paradigm comprising encoder training, relation clustering, and summarization. Specifically, to train a discriminative relational encoder, we propose a novel selective contrastive learning framework, namely SCL, in which selective importance scores are assigned to distinguish the importance of different negative contrastive instances. During testing, the prompt-based encoder maps test samples into representation vectors, which are then clustered into several groups. Typical samples closest to each cluster centroid are selected for summarization, generating the predicted relation for all samples in the cluster. Moreover, we design a simple non-parametric threshold plugin to reduce false-positive errors when inferring over unseen relation representations. Our experiments demonstrate that SCL outperforms the current state-of-the-art method by over 3% on all metrics.
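To make the pipeline concrete, the two core ingredients described above can be sketched in a few lines: an InfoNCE-style contrastive loss in which each negative instance carries its own importance weight, and a cluster step that picks the sample nearest each centroid as the "typical" sample to summarize. This is a minimal, hypothetical sketch; the function names, the cosine-similarity formulation, and the way weights enter the denominator are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def selective_contrastive_loss(anchor, positive, negatives, weights, tau=0.1):
    """InfoNCE-style loss where each negative contrastive instance is
    scaled by a selective importance weight (higher weight = harder
    penalty for similarity to that negative)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(w * np.exp(cos(anchor, n) / tau)
              for n, w in zip(negatives, weights))
    return -np.log(pos / (pos + neg))

def nearest_to_centroid(vectors, labels, k):
    """Given clustered representation vectors, return, for each cluster,
    the index of the sample closest to that cluster's centroid; these
    typical samples would then be passed to the summarization step."""
    picks = {}
    for c in range(k):
        idx = np.where(labels == c)[0]
        centroid = vectors[idx].mean(axis=0)
        picks[c] = idx[np.argmin(np.linalg.norm(vectors[idx] - centroid, axis=1))]
    return picks
```

In this sketch, driving the weighted loss toward zero pulls same-relation pairs together while pushing the anchor away from weighted negatives, so that at test time representations of the same unseen relation fall into the same cluster and a single near-centroid sample can stand in for the whole group.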