TY  - JOUR
AB  - Neural encoder-decoder models for language generation can be trained to predict words directly from linguistic or non-linguistic inputs. When generating with these so-called end-to-end models, however, the NLG system needs an additional decoding procedure that determines the output sequence, given the infinite search space over potential sequences that could be generated with the given vocabulary. This survey paper provides an overview of the different ways of implementing decoding on top of neural network-based generation models. Research into decoding has become a real trend in the area of neural language generation, and numerous recent papers have shown that the choice of decoding method has a considerable impact on the quality and various linguistic properties of the generation output of a neural NLG system. This survey aims to contribute to a more systematic understanding of decoding methods across different areas of neural NLG. We group the reviewed methods with respect to the broad type of objective that they optimize in the generation of the sequence—likelihood, diversity, and task-specific linguistic constraints or goals—and discuss their respective strengths and weaknesses.
DA  - 2021
DO  - 10.3390/info12090355
KW  - neural language generation
KW  - decoding
KW  - beam search
KW  - sampling
KW  - diversity
LA  - eng
IS  - 9
PY  - 2021
T2  - Information
TI  - Decoding Methods in Neural Language Generation: A Survey
UR  - https://nbn-resolving.org/urn:nbn:de:0070-pub-29570917
Y2  - 2024-11-21T21:20:41
ER  -