Reinforced Extractive Summarization With Question-Focused Rewards
Abstract
We investigate a new training paradigm for extractive summarization. Traditionally, human abstracts are used to derive gold-standard labels for extraction units. However, the labels are often inaccurate, because human abstracts and source documents cannot be easily aligned at the word level. In this paper, we convert human abstracts to a set of Cloze-style comprehension questions. System summaries are encouraged to preserve salient source content useful for answering questions and to share common words with the abstracts. We use reinforcement learning to explore the space of possible extractive summaries and introduce a question-focused reward function to promote concise, fluent, and informative summaries. Our experiments show that the proposed method is effective. It surpasses state-of-the-art systems on the standard summarization dataset.
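To make the approach described in the abstract concrete, below is a minimal, self-contained Python sketch of its two key ingredients: deriving Cloze-style questions from an abstract sentence by blanking out content words, and scoring a candidate extractive summary with a question-focused reward that mixes answerability with word overlap. The tokenizer, stopword list, and the mixing weight `alpha` are illustrative assumptions, not the authors' exact formulation; in the paper, such a reward would drive a reinforcement-learning policy over extraction decisions.

```python
# Sketch of the two ideas named in the abstract: (1) turning an abstract
# sentence into Cloze-style (question, answer) pairs, and (2) a
# question-focused reward for a candidate extractive summary.
# The tokenizer, stopword list, and reward weighting below are
# illustrative assumptions, not the paper's exact formulation.

import re
from typing import List, Tuple

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "are",
             "for", "on", "with", "that", "this", "be", "we"}

def tokenize(text: str) -> List[str]:
    return re.findall(r"[a-z0-9']+", text.lower())

def make_cloze_questions(abstract_sentence: str) -> List[Tuple[str, str]]:
    """Blank out each content word to form (question, answer) pairs."""
    tokens = tokenize(abstract_sentence)
    questions = []
    for i, tok in enumerate(tokens):
        if tok in STOPWORDS:
            continue
        blanked = tokens[:i] + ["_____"] + tokens[i + 1:]
        questions.append((" ".join(blanked), tok))
    return questions

def question_focused_reward(summary: str, abstract_sentence: str,
                            alpha: float = 0.5) -> float:
    """Reward = alpha * fraction of Cloze answers recoverable from the
    summary + (1 - alpha) * unigram overlap with the abstract.
    The mixing weight alpha is a hypothetical choice."""
    summary_tokens = set(tokenize(summary))
    questions = make_cloze_questions(abstract_sentence)
    if not questions:
        return 0.0
    answerable = sum(1 for _, ans in questions if ans in summary_tokens)
    answer_score = answerable / len(questions)
    abstract_tokens = set(tokenize(abstract_sentence)) - STOPWORDS
    overlap = (len(summary_tokens & abstract_tokens) / len(abstract_tokens)
               if abstract_tokens else 0.0)
    return alpha * answer_score + (1 - alpha) * overlap

if __name__ == "__main__":
    abstract_sent = "Reinforcement learning explores the space of extractive summaries."
    candidate = "The system uses reinforcement learning to select summary sentences."
    print(round(question_focused_reward(candidate, abstract_sent), 3))
```

In an RL training loop, a score like this would serve as the terminal reward for a sampled sequence of sentence selections; the answerability term rewards preserving salient, question-relevant content, while the overlap term encourages lexical agreement with the human abstract.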
Publication Date
1-1-2018
Publication Title
ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Student Research Workshop
Pages
105-111
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.18653/v1/p18-3015
Copyright Status
Unknown
Scopus ID
85063108636 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/85063108636
STARS Citation
Arumae, Kristjan and Liu, Fei, "Reinforced Extractive Summarization With Question-Focused Rewards" (2018). Scopus Export 2015-2019. 8864.
https://stars.library.ucf.edu/scopus2015/8864