A Multi-task Learning Model for Gold-two-mention Co-reference Resolution
- Resource Type
- Conference
- Authors
- Liu, Ruicheng; Chen, Guanyi; Mao, Rui; Cambria, Erik
- Source
- 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, Jun. 2023
- Subject
- Components, Circuits, Devices and Systems; Computing and Processing; Power, Energy and Industry Applications; Robotics and Control Systems; Signal Processing and Analysis; Training; Analytical models; Solid modeling; Neural networks; Linguistics; Multitasking; Solids; Co-reference Resolution; Natural Language Processing; Deep Learning
- Language
- English
- ISSN
- 2161-4407
Co-reference resolution is the task of resolving expressions that refer to the same entity in natural language. It is an important part of modern natural language processing and semantic cognition, because such implicit relationships are particularly difficult for natural language understanding in downstream tasks. General co-reference resolution comprises two sub-tasks: mention identification and mention linking. Gold-two-mention co-reference resolution is a special variant that focuses on linking an ambiguous pronoun to one of two candidate antecedents. In this paper, we propose a joint learning model that learns the mention identification and mention linking tasks together, because we find that learning mention identification provides supportive information for learning mention linking. To the best of our knowledge, ours is the first model to introduce a multi-task learning framework to the gold-two-mention co-reference resolution task. Our proposed model outperforms state-of-the-art baselines and a single-task learning model on three gold-two-mention co-reference resolution datasets. By comparing the errors made by the single-task and multi-task learning models, our error analysis also yields findings about the ways in which the multi-task model makes fewer resolution errors.
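The abstract does not specify the architecture, so the following is only a minimal sketch of the general multi-task pattern it describes: a shared encoder feeding two heads, one scoring mention identification and one choosing between the two candidate antecedents, trained with a summed joint loss. The encoder choice (a small BiLSTM), all layer sizes, the label formats, and the unweighted loss sum are illustrative assumptions, not the authors' model.

```python
# Hypothetical sketch of joint mention identification + mention linking.
# Nothing here is taken from the paper beyond the two-task structure.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointMentionModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        # Shared representation used by both tasks.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Head 1: mention identification (a binary score per token).
        self.mention_head = nn.Linear(2 * hidden_dim, 1)
        # Head 2: mention linking (pick antecedent A or B for the pronoun).
        self.link_head = nn.Linear(2 * hidden_dim, 2)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))        # (B, T, 2H)
        mention_logits = self.mention_head(states).squeeze(-1)  # (B, T)
        # Mean-pool the sequence to score the two candidates (an assumption;
        # a real model would use span representations of the gold mentions).
        link_logits = self.link_head(states.mean(dim=1))         # (B, 2)
        return mention_logits, link_logits

# Joint training step on toy data: summing the two losses lets the
# mention-identification signal shape the representation used for linking.
model = JointMentionModel()
tokens = torch.randint(0, 10000, (4, 32))            # toy token ids
mention_labels = torch.randint(0, 2, (4, 32)).float()  # per-token mention flags
link_labels = torch.randint(0, 2, (4,))               # antecedent A or B

mention_logits, link_logits = model(tokens)
loss = (F.binary_cross_entropy_with_logits(mention_logits, mention_labels)
        + F.cross_entropy(link_logits, link_labels))
loss.backward()
```

The design choice the sketch illustrates is the one the abstract argues for: because both heads backpropagate into the same encoder, the auxiliary mention identification task acts as extra supervision for the representation that the linking head consumes.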