SNaC: Coherence Error Detection for Narrative Summarization

Paper Abstract

Progress in summarizing long texts is inhibited by the lack of appropriate evaluation frameworks. When a long summary must be produced to appropriately cover the facets of a long source text, that summary needs to present a coherent narrative to be understandable by a reader, but current automatic and human evaluation methods fail to identify gaps in coherence. In this work, we introduce SNaC, a narrative coherence evaluation framework rooted in fine-grained annotations for long summaries. We develop a taxonomy of coherence errors in generated narrative summaries and collect span-level annotations for 6.6k sentences across 150 book and movie screenplay summaries. Our work provides the first characterization of coherence errors generated by state-of-the-art summarization models and a protocol for eliciting coherence judgments from crowd annotators. Furthermore, we show that the collected annotations allow us to train a strong classifier for automatically localizing coherence errors in generated summaries as well as benchmarking past work in coherence modelling. Finally, our SNaC framework can support future work in long document summarization and coherence evaluation, including improved summarization modelling and post-hoc summary correction.


If you find our work useful, please consider citing our paper:
        @article{goyal2022snac,
            title={SNaC: Coherence Error Detection for Narrative Summarization},
            author={Goyal, Tanya and Li, Junyi Jessy and Durrett, Greg},
            journal={arXiv preprint}
        }


If you have any questions, please contact Tanya Goyal.