Please use this identifier to cite or link to this item:
https://etd.cput.ac.za/handle/20.500.11838/3364
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kabaso, Boniface, Dr | en_US |
dc.contributor.advisor | Mukherjee, A., Mr | en_US |
dc.contributor.author | Tchouya’a Ngoko, Israel Christian | en_US |
dc.date.accessioned | 2022-01-18T10:46:00Z | - |
dc.date.available | 2022-01-18T10:46:00Z | - |
dc.date.issued | 2020 | - |
dc.identifier.uri | http://etd.cput.ac.za/handle/20.500.11838/3364 | - |
dc.description | Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2020 | en_US |
dc.description.abstract | In this new century, the huge amount of data produced daily will remain useless unless we use emerging tools and technologies to make it accessible. There is a need for content summarisers to reduce manual summarisation, which is time-consuming and incurs massive costs. In recent years, sequence-to-sequence learning has attracted growing interest. Text summarisation in natural language processing has largely been limited to extractive methods, which select the important sentences of the original text and combine them to form the final summary. The success of end-to-end training of encoder-decoder neural networks on machine translation tasks has spurred research applying the same architectures to tasks such as paraphrase generation and abstractive text summarisation. Abstractive text summarisation attempts to capture the main content of a text and compress it while preserving its meaning and its semantic and grammatical correctness; it generates paraphrases dynamically and produces natural summaries, but it has been less attempted and is less well understood. Sequence-to-sequence models founded on Recurrent Neural Networks (RNNs) link the input and output data in an encoder-decoder architecture, and produce good output summaries when attention mechanisms are added to the RNN layers. Research has shown that these architectures, equipped with attention mechanisms, perform well in machine translation. Abstractive text summarisation using recurrent neural networks with attention mechanisms at the sentence level has produced better results, surpassing the recent state-of-the-art model for abstractive text summarisation. However, for longer documents the summaries these models produce often contain grammatical errors. In this investigation we employ a data-controlled approach using recurrent neural networks at the paragraph level and train the model end-to-end to predict the summary for a given text document. We evaluate the model on the DUC 2004 dataset. Our model produces higher-quality summaries, obtaining ROUGE-1, ROUGE-2 and ROUGE-L scores of 44.44, 22.50 and 45.15 respectively on DUC 2004. (An illustrative sketch of the encoder-decoder-with-attention architecture follows the metadata table below.) | en_US |
dc.language.iso | en | en_US |
dc.publisher | Cape Peninsula University of Technology | en_US |
dc.subject | Computational linguistics | en_US |
dc.subject | Semantic computing | en_US |
dc.subject | Automatic abstracting | en_US |
dc.subject | Neural networks (Computer science) | en_US |
dc.title | Abstractive text summarisation using recurrent neural networks at the paragraph level | en_US |
dc.type | Thesis | en_US |
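The abstract describes an encoder-decoder recurrent network with an attention mechanism, trained end-to-end and evaluated with ROUGE. As a minimal sketch of that general family of architecture, not the thesis's actual model, the PyTorch fragment below implements a GRU encoder and a single attention-based decoding step; the vocabulary and layer sizes (VOCAB, EMB, HIDDEN) and the choice of dot-product attention are illustrative assumptions.

```python
# Minimal encoder-decoder-with-attention sketch (PyTorch).
# Sizes and the dot-product attention variant are assumptions
# made for illustration, not details taken from the thesis.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HIDDEN = 10_000, 128, 256  # assumed sizes

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HIDDEN, batch_first=True)

    def forward(self, src):                 # src: (batch, src_len)
        out, h = self.rnn(self.embed(src))  # out: (batch, src_len, HIDDEN)
        return out, h

class AttnDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB + HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, tok, h, enc_out):     # one decoding step
        # Dot-product attention over the encoder states.
        scores = torch.bmm(enc_out, h[-1].unsqueeze(2))         # (B, S, 1)
        weights = F.softmax(scores, dim=1)
        context = (weights * enc_out).sum(dim=1, keepdim=True)  # (B, 1, H)
        emb = self.embed(tok)                                   # (B, 1, E)
        out, h = self.rnn(torch.cat([emb, context], dim=2), h)
        return self.out(out.squeeze(1)), h  # logits over the vocabulary

# One forward step on dummy data; token id 0 stands in for <sos>.
enc, dec = Encoder(), AttnDecoder()
enc_out, h = enc(torch.randint(0, VOCAB, (2, 12)))
logits, h = dec(torch.zeros(2, 1, dtype=torch.long), h, enc_out)
```

The ROUGE-1, ROUGE-2 and ROUGE-L figures quoted in the abstract can be computed with, for example, Google's rouge-score package; the snippet below shows the metric's mechanics on toy strings, not on the DUC 2004 data.

```python
# Illustrative ROUGE scoring (pip install rouge-score).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
reference = "the cat sat on the mat"
candidate = "a cat was sitting on the mat"
for name, score in scorer.score(reference, candidate).items():
    print(name, f"F1 = {score.fmeasure:.3f}")
```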
Appears in Collections: Information Technology - Master's Degree
Files in This Item:
File | Description | Size | Format
---|---|---|---
Tchouya'a_Ngoko_Israel_212014269.pdf | | 1.75 MB | Adobe PDF
Page view(s): 266 (checked on Nov 17, 2024)
Download(s): 210 (checked on Nov 17, 2024)
Items in Digital Knowledge are protected by copyright, with all rights reserved, unless otherwise indicated.