Abstract
In this paper, we propose a unified framework for abstractive summarization that combines a prompt-based language model with a pointer mechanism. Abstractive summarization systems typically pair a text encoder with a text decoder: current methods condense and paraphrase a document using an encoder-decoder architecture. To paraphrase a document more effectively, we instead propose a model that uses only a topic-sensitive decoder, consisting of a prompt input module, a text decoder, and a pointer mechanism. We evaluate our model on the XSum, Gigaword, and CNN/DailyMail summarization datasets; experimental results show that it achieves state-of-the-art results on XSum and comparable results on the other two datasets.
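The abstract does not specify how the pointer mechanism is implemented. As a rough illustration only (not the authors' method), a standard pointer/copy mechanism in the style of pointer-generator networks mixes the decoder's vocabulary distribution with an attention-based copy distribution over source tokens. All names and values below are hypothetical toy data:

```python
import numpy as np

def pointer_mixture(p_vocab, attention, src_ids, vocab_size, p_gen):
    """Mix a generation distribution with a copy distribution.

    p_vocab:   decoder softmax over the vocabulary, shape (vocab_size,)
    attention: attention weights over source tokens, shape (src_len,)
    src_ids:   vocabulary ids of the source tokens, shape (src_len,)
    p_gen:     scalar in [0, 1], probability of generating vs. copying
    """
    p_copy = np.zeros(vocab_size)
    # Scatter-add attention mass onto the vocabulary ids of the source tokens.
    np.add.at(p_copy, src_ids, attention)
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy

# Toy example: vocabulary of 5 tokens, source document of 3 tokens.
p_vocab = np.array([0.1, 0.4, 0.2, 0.2, 0.1])
attention = np.array([0.5, 0.3, 0.2])
src_ids = np.array([2, 2, 4])  # source tokens map to vocab ids 2 and 4
mixed = pointer_mixture(p_vocab, attention, src_ids, vocab_size=5, p_gen=0.7)
print(mixed)  # still a valid probability distribution (sums to 1)
```

Copying raises the probability of rare or out-of-vocabulary source tokens that a purely generative decoder would be unlikely to emit, which is the usual motivation for adding a pointer to a summarizer.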
