TY - GEN
T1 - Improving Zero-shot Relation Classification via Automatically-acquired Entailment Templates
AU - Rahimi, Mahdi
AU - Surdeanu, Mihai
N1 - Publisher Copyright: © 2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
AB - While fully supervised relation classification (RC) models perform well on large-scale datasets, their performance drops drastically in low-resource settings. As generating annotated examples is expensive, recent zero-shot methods have been proposed that reformulate RC into other NLP tasks for which supervision exists, such as textual entailment. However, these methods rely on manually created templates, which is costly and requires domain expertise. In this paper, we present a novel strategy for template generation for relation classification, based on adapting Harris’ distributional similarity principle to templates encoded using contextualized representations. Further, we perform an empirical evaluation of different strategies for combining the automatically acquired templates with manual templates. The experimental results on TACRED show that our approach not only performs better than zero-shot RC methods that use only manual templates, but also achieves state-of-the-art performance for zero-shot TACRED with an F1 score of 64.3.
UR - http://www.scopus.com/inward/record.url?scp=85174526097&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85174526097&partnerID=8YFLogxK
M3 - Conference contribution
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 187
EP - 195
BT - ACL 2023 - 8th Workshop on Representation Learning for NLP, RepL4NLP 2023 - Proceedings of the Workshop
A2 - Can, Burcu
A2 - Mozes, Maximilian
A2 - Cahyawijaya, Samuel
A2 - Saphra, Naomi
A2 - Kassner, Nora
A2 - Ravfogel, Shauli
A2 - Ravichander, Abhilasha
A2 - Zhao, Chen
A2 - Augenstein, Isabelle
A2 - Rogers, Anna
A2 - Cho, Kyunghyun
A2 - Grefenstette, Edward
A2 - Voita, Lena
PB - Association for Computational Linguistics (ACL)
T2 - 8th Workshop on Representation Learning for NLP, RepL4NLP 2023, co-located with ACL 2023
Y2 - 13 July 2023
ER -
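
The abstract describes casting relation classification as textual entailment: each candidate relation is verbalized as a hypothesis via a template with the entity mentions filled in, and an NLI model scores the sentence (premise) against each hypothesis. The minimal Python sketch below illustrates that reformulation only; it is not the authors' implementation. The NLI checkpoint (facebook/bart-large-mnli), the example templates, and the simplistic handling of no_relation are all illustrative assumptions, and the paper's actual contribution (automatically acquiring templates and combining them with manual ones) is not reproduced here.

# Sketch of NLI-based zero-shot relation classification with entailment
# templates, as described in the abstract. Assumptions: model choice and
# templates are hypothetical; real systems threshold scores for no_relation.
from transformers import pipeline

nli = pipeline("zero-shot-classification",
               model="facebook/bart-large-mnli")  # assumed NLI checkpoint

# Hypothetical manual templates; {subj}/{obj} are entity placeholders.
TEMPLATES = {
    "per:employee_of": "{subj} works for {obj}.",
    "org:founded_by":  "{obj} founded {subj}.",
    "no_relation":     "{subj} has no relation to {obj}.",
}

def classify(sentence: str, subj: str, obj: str) -> str:
    """Return the relation whose filled-in template the NLI model
    judges most entailed by the sentence."""
    hypotheses = {rel: tpl.format(subj=subj, obj=obj)
                  for rel, tpl in TEMPLATES.items()}
    # hypothesis_template="{}" uses each filled template verbatim
    # as the entailment hypothesis.
    result = nli(sentence,
                 candidate_labels=list(hypotheses.values()),
                 hypothesis_template="{}")
    best = result["labels"][0]  # labels come back sorted by score
    return next(rel for rel, hyp in hypotheses.items() if hyp == best)

print(classify("John Smith joined Acme Corp in 2010.",
               subj="John Smith", obj="Acme Corp"))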