PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

Published at the Thirty-seventh International Conference on Machine Learning (ICML 2020).

Paper Link: arXiv (https://arxiv.org/abs/1912.08777)

Code is available on Github: https://github.com/google-research/pegasus

Post by Google AI Blog: https://ai.googleblog.com/2020/06/pegasus-state-of-art-model-for.html

Abstract

Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore, there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples.
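
The gap-sentence generation objective described above can be made concrete with a short sketch: score each sentence against the rest of the document, mask the top-scoring ones in the input, and concatenate them as the target sequence. This is a minimal illustration only; the naive sentence splitter, the unigram-F1 proxy used in place of ROUGE-1, and the `<mask_1>` token name are simplifying assumptions, not the released implementation.

```python
import re

MASK_TOKEN = "<mask_1>"  # illustrative placeholder; the actual mask token may differ

def split_sentences(document: str):
    """Very rough sentence splitter, for illustration only."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]

def unigram_f1(sentence: str, reference: str) -> float:
    """Unigram-overlap F1, a cheap stand-in for ROUGE-1 scoring."""
    s, r = set(sentence.lower().split()), set(reference.lower().split())
    overlap = len(s & r)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(s), overlap / len(r)
    return 2 * precision * recall / (precision + recall)

def make_gsg_example(document: str, gap_ratio: float = 0.3):
    """Mask the top-scoring `gap_ratio` of sentences and use them as the target."""
    sentences = split_sentences(document)
    n_gaps = max(1, int(round(gap_ratio * len(sentences))))
    # Score each sentence independently against the rest of the document.
    scores = [
        unigram_f1(sent, " ".join(sentences[:i] + sentences[i + 1:]))
        for i, sent in enumerate(sentences)
    ]
    selected = set(sorted(range(len(sentences)), key=lambda i: -scores[i])[:n_gaps])
    inputs = " ".join(MASK_TOKEN if i in selected else s for i, s in enumerate(sentences))
    target = " ".join(sentences[i] for i in sorted(selected))
    return inputs, target
```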

State-of-the-art Results on 12 Datasets

Scores are reported as ROUGE-1/ROUGE-2/ROUGE-L.

| Dataset        | C4                | HugeNews          | Mixed & Dynamic      |
|----------------|-------------------|-------------------|----------------------|
| xsum           | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64    |
| cnn_dailymail  | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30    |
| newsroom       | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18    |
| multi_news     | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95    |
| gigaword       | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76    |
| wikihow        | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *  |
| reddit_tifu    | 26.54/8.94/21.64  | 26.63/9.01/21.60  | 27.99/9.81/22.94     |
| big_patent     | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *  |
| arxiv          | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67    |
| pubmed         | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25    |
| aeslc          | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51    |
| billsum        | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59    |

The “Mixed & Dynamic” model has the following changes:

  • Trained on both C4 and HugeNews (the dataset mixture is weighted by the number of examples in each corpus).
  • Trained for 1.5M steps instead of 500k steps (we observed slower convergence on pretraining perplexity).
  • The model dynamically chooses 15%-45% of the document's sentences as important sentences to generate.
  • Important sentences are sampled rather than selected with a fixed strategy, by adding 20% noise to the importance scores (see the selection sketch after this list).
  • The SentencePiece tokenizer is updated to be able to encode the newline character.
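
A rough sketch of the dynamic, sampled gap-sentence selection described in the two bullets above, assuming per-sentence importance scores have already been computed; the exact ratio sampling and noise model in the released code may differ.

```python
import random

def sample_gap_sentences(importance_scores, min_ratio=0.15, max_ratio=0.45, noise=0.20):
    """Return indices of sentences to mask, using a dynamic ratio and noisy scores.

    `importance_scores` holds one score per sentence (e.g. ROUGE against the
    rest of the document). Illustrative sketch only, not the released code.
    """
    n = len(importance_scores)
    ratio = random.uniform(min_ratio, max_ratio)   # dynamic 15%-45% gap ratio
    n_gaps = max(1, int(round(ratio * n)))
    # Perturb each score by up to +/-20% so the selection is sampled, not fixed.
    noisy = [s * (1.0 + random.uniform(-noise, noise)) for s in importance_scores]
    return sorted(range(n), key=lambda i: -noisy[i])[:n_gaps]
```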

(*) The numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:

  • The wikihow dataset contains newline characters, which are useful for paragraph segmentation; the SentencePiece tokenizer of the C4 and HugeNews models does not encode newlines and therefore loses this information (see the tokenizer sketch after this list).
  • We updated the BigPatent dataset to preserve casing; some format cleaning was also changed. Please refer to the change in TFDS.
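
For reference, a hedged sketch of how newlines can be preserved with SentencePiece by reserving a user-defined symbol for them; the `<n>` symbol name, corpus path, and vocabulary size below are illustrative assumptions, not the exact settings of the released tokenizer.

```python
import sentencepiece as spm

# SentencePiece consumes its training corpus line by line, so raw newlines
# cannot survive directly. The corpus is preprocessed to replace "\n" with a
# placeholder such as "<n>", which is then reserved as a user-defined symbol.
spm.SentencePieceTrainer.train(
    input="corpus_with_newline_placeholders.txt",  # hypothetical preprocessed corpus
    model_prefix="pegasus_newline_sp",             # hypothetical output prefix
    vocab_size=96000,                              # assumed size; see the paper for the actual vocabulary
    user_defined_symbols=["<n>"],                  # placeholder token standing in for "\n"
)
```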

Citation

@misc{zhang2019pegasus,
    title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
    author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
    year={2019},
    eprint={1912.08777},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}