Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER

Date
2022
Academic Conference
The Annual Meeting of the Association for Computational Linguistics
Authors
Dong-Ho Lee(University of Southern California)
Akshen Kadakia(University of Southern California)
Kangmin Tan(University of Southern California)
Mahak Agarwal(University of Southern California)
Xinyu Feng(University of Southern California)
Takashi Shibuya(Sony Group Corporation)
Ryosuke Mitani(Sony Group Corporation)
Toshiyuki Sekiya(Sony Group Corporation)
Jay Pujara(University of Southern California)
Xiang Ren(University of Southern California)
Research Areas
AI & Machine Learning

Abstract

Recent advances in prompt-based learning have shown strong results on few-shot text classification using cloze-style templates. Similar attempts have been made for named entity recognition (NER), in which templates are manually designed to predict entity types for every text span in a sentence. However, such methods may suffer from error propagation induced by entity span detection, high cost due to the enumeration of all possible text spans, and omission of inter-dependencies among token labels in a sentence. Here we present a simple demonstration-based learning method for NER, which lets the input be prefaced by task demonstrations for in-context learning. We perform a systematic study of demonstration strategy: what to include (entity examples, with or without surrounding context), how to select the examples, and what templates to use. Results on in-domain learning and domain adaptation show that the model's performance in low-resource settings can be largely improved with a suitable demonstration strategy (e.g., a 4-17% improvement with 25 training instances). We also find that a good demonstration can save many labeled examples, and that consistency in demonstrations contributes to better performance.
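To make the idea concrete, the Python sketch below shows one way to build an entity-oriented demonstration from a handful of labeled training sentences and preface the input with it. The [SEP] separator, the "X is a TYPE entity." template, and the helper names are illustrative assumptions for exposition, not the paper's exact implementation.

# Minimal sketch: demonstration-based input construction for NER.
# Template, separator token, and function names are assumptions.

# A few labeled sentences from a small training set.
train_examples = [
    ("Steve Jobs founded Apple in Cupertino .",
     [("Steve Jobs", "PER"), ("Apple", "ORG"), ("Cupertino", "LOC")]),
    ("Barack Obama visited Berlin last year .",
     [("Barack Obama", "PER"), ("Berlin", "LOC")]),
]

def build_demonstration(examples, entity_types):
    """For each entity type, pick one example entity together with its
    surrounding context sentence, rendered with a simple template."""
    parts = []
    for etype in entity_types:
        for sentence, entities in examples:
            spans = [text for text, label in entities if label == etype]
            if spans:
                # Entity example plus the sentence it occurs in.
                parts.append(f"{sentence} {spans[0]} is a {etype} entity.")
                break
    return " [SEP] ".join(parts)

demo = build_demonstration(train_examples, ["PER", "ORG", "LOC"])
input_sentence = "Tim Cook spoke at a conference in Tokyo ."

# The model sees the input sentence prefaced by the demonstration;
# token labels are still predicted only for the original sentence.
model_input = f"{demo} [SEP] {input_sentence}"
print(model_input)

In this sketch the demonstration is fixed per entity type; the paper's systematic study compares such selection strategies (with or without surrounding context, different templates) rather than prescribing this particular one.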
