Natural language generation (NLG) is a subfield of natural language processing (NLP) that is often characterized as the study of automatically converting non-linguistic representations (e.g., from databases or other knowledge sources) into coherent natural language text. In recent years the field has evolved substantially. Perhaps the most important new development is the current emphasis on data-oriented methods and empirical evaluation. Several factors have had a considerable impact on the field: progress in related areas such as machine translation, dialogue system design, and automatic text summarization, and the resulting awareness of the importance of language generation; the increasing availability of suitable corpora; and the organization of shared tasks for NLG, where different teams of researchers develop and evaluate their algorithms on a shared, held-out data set. This book offers the first comprehensive overview of recent empirically oriented NLG research.

From the book: the pseudo-code for the Chu-Liu/Edmonds algorithm. The adaptation described there uses dependency edges that encode linear precedence. The algorithm finds a rooted directed spanning tree, specified by an acyclic set of edges … The first stage finds, for each vertex v ∈ V, the best edge connecting it to another vertex.
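The first stage mentioned above can be made concrete. What follows is a minimal Python sketch of the standard Chu-Liu/Edmonds maximum spanning arborescence algorithm (greedy best-incoming-edge selection, then cycle contraction), not the book's own code; all function names and the edge representation are assumptions for illustration.

```python
def _find_cycle(parent):
    """Return the set of vertices on a cycle in `parent`, or None."""
    for start in parent:
        seen = []
        v = start
        while v in parent and v not in seen:
            seen.append(v)
            v = parent[v]
        if v in seen:
            return set(seen[seen.index(v):])
    return None


def chu_liu_edmonds(edges, root):
    """Maximum-weight spanning arborescence rooted at `root`.

    `edges` maps (head, dependent) pairs to weights; the result maps
    each dependent to its chosen head.  (Illustrative sketch.)
    """
    nodes = {u for u, _ in edges} | {v for _, v in edges}

    # First stage: for each vertex except the root, keep only the
    # best-scoring incoming edge.
    best_in = {}
    for v in nodes:
        if v == root:
            continue
        incoming = [(w, u) for (u, x), w in edges.items() if x == v]
        if incoming:
            _, u = max(incoming, key=lambda t: t[0])
            best_in[v] = u

    cycle = _find_cycle(best_in)
    if cycle is None:
        return best_in  # already an arborescence

    # Contract the cycle into a fresh placeholder node.
    c = object()
    new_edges = {}  # (u, v) -> (adjusted weight, original endpoint)
    for (u, v), w in edges.items():
        if u in cycle and v in cycle:
            continue
        if v in cycle:
            # Entering edge: score it relative to the cycle edge it breaks.
            adj = w - edges[(best_in[v], v)]
            key = (u, c)
            if key not in new_edges or adj > new_edges[key][0]:
                new_edges[key] = (adj, v)
        elif u in cycle:
            key = (c, v)
            if key not in new_edges or w > new_edges[key][0]:
                new_edges[key] = (w, u)
        else:
            new_edges[(u, v)] = (w, None)

    contracted = {k: w for k, (w, _) in new_edges.items()}
    sub = chu_liu_edmonds(contracted, root)

    # Expand: restore the original endpoints of contracted edges and
    # keep every cycle edge except the one broken by the entering edge.
    result = {}
    broken = None
    for v, u in sub.items():
        w, orig = new_edges[(u, v)]
        if v is c:
            broken = orig
            result[orig] = u
        elif u is c:
            result[v] = orig
        else:
            result[v] = u
    for v in cycle:
        if v != broken:
            result[v] = best_in[v]
    return result
```

The entering-edge reweighting (`w - edges[(best_in[v], v)]`) is the standard trick that lets the recursive call decide which cycle edge to break at no extra bookkeeping cost.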
Title: Empirical Methods in Natural Language Generation
Author: Emiel Krahmer, Mariët Theune
Publisher: Springer Science & Business Media, 2010-09-09