Knowledge-Rich Event Coreference Resolution

Date

2021-05-04

Abstract

Information extraction, a key area of research in Natural Language Processing (NLP), concerns the extraction of structured information from natural language documents. Recent years have seen a gradual shift of focus from entity-based tasks to event-based tasks in information extraction research. Being a core event-based task, event coreference resolution, the task of determining which event mentions in a document refer to the same real-world event, is generally considered one of the most challenging tasks in NLP. More specifically, for two event mentions to be coreferent, both their triggers (i.e., the words realizing the occurrence of events) and their corresponding arguments (e.g., the times, places, and people involved in them) have to be compatible. However, identifying potential arguments (which is typically performed by an entity extraction system), linking arguments to their event mentions (which is typically performed by an event extraction system), and determining the compatibility of two event arguments (which is typically provided by an entity coreference resolver) are all non-trivial tasks. In other words, end-to-end event coreference resolution is complicated in part by the fact that an event coreference resolver has to rely on the noisy outputs produced by its upstream components in the standard information extraction pipeline. Many existing event coreference resolvers avoid the hassle of dealing with noisy information and simply adopt a knowledge-lean approach consisting of a pipeline of two components: a trigger detection component that identifies triggers and their subtypes, followed by an event coreference component. We hypothesize that knowledge-lean approaches are not the right way to go if the ultimate goal is to take event coreference resolvers to the next level of performance.
With this in mind, we investigate knowledge-rich approaches in which we derive potentially useful knowledge for event coreference resolution from a variety of sources, including models that are trained on tasks that we believe are closely related to event coreference, statistical and linguistic features that are directly relevant to the prediction of event coreference links, as well as constraints that encode commonsense knowledge of when two event mentions should or should not be coreferent. We start by designing a multi-pass sieve approach that first resolves easy coreference links and then exploits these easy-to-identify links as a source of knowledge for identifying difficult coreference links. We then investigate two types of joint models for event coreference resolution, a joint inference model and a joint learning model, in which we encode commonsense knowledge of the inter-dependencies between the various components via hard or soft constraints. In addition, we incorporate non-local information extracted from the broader context preceding an event mention by learning a supervised topic model and modeling discourse salience. Further, we present an unsupervised method for deriving argument compatibility information from a large, unannotated corpus, and develop a transfer-learning framework that transfers the resulting argument (in)compatibility knowledge to an event coreference resolver. Finally, we investigate a multi-task neural model that simultaneously learns six tasks related to event coreference, guiding the learning process with cross-task consistency constraints.
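The multi-pass sieve idea described above can be sketched as follows: mentions start in singleton clusters, and sieves are applied from most to least precise, so that later, lower-precision sieves can exploit the clusters formed by earlier ones. This is a minimal illustrative sketch only; the mention representation, sieve rules, and all names here are hypothetical and are not taken from the dissertation.

```python
# Minimal sketch of a multi-pass sieve for event coreference.
# All data structures and sieve rules are hypothetical, for illustration only.

def exact_trigger_match(m1, m2):
    # High-precision sieve: identical trigger words are assumed to corefer.
    return m1["trigger"] == m2["trigger"]

def compatible_arguments(m1, m2):
    # Lower-precision sieve: same event subtype and no known argument clash
    # (empty argument sets are treated as compatible with anything).
    return (m1["subtype"] == m2["subtype"]
            and not (m1["args"] and m2["args"]
                     and m1["args"].isdisjoint(m2["args"])))

def resolve(mentions, sieves):
    # Each mention starts in its own singleton cluster; sieves run from most
    # to least precise, so later sieves see links found by earlier ones.
    cluster_of = {i: i for i in range(len(mentions))}
    clusters = {i: {i} for i in range(len(mentions))}
    for sieve in sieves:
        for i in range(len(mentions)):
            for j in range(i + 1, len(mentions)):
                ci, cj = cluster_of[i], cluster_of[j]
                if ci != cj and sieve(mentions[i], mentions[j]):
                    clusters[ci] |= clusters.pop(cj)  # merge the two clusters
                    for k in clusters[ci]:
                        cluster_of[k] = ci
    return clusters

mentions = [
    {"trigger": "attack", "subtype": "Conflict", "args": {"Baghdad"}},
    {"trigger": "attack", "subtype": "Conflict", "args": set()},
    {"trigger": "strike", "subtype": "Conflict", "args": {"Baghdad"}},
]
result = resolve(mentions, [exact_trigger_match, compatible_arguments])
# Mentions 0 and 1 merge in the first pass (same trigger); mention 2 then
# joins in the second pass via compatible subtype and arguments.
```

The key design point is precision ordering: because high-precision sieves run first, the riskier argument-compatibility sieve operates over partially built clusters rather than raw mention pairs.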

Keywords

Natural language processing (Computer science), Information retrieval, Transfer learning (Machine learning)
