Abstract
Despite artificial intelligence’s far-reaching influence in financial reporting and other business domains, there is a surprising dearth of accessible descriptions of the assumptions underlying the software’s development, along with an absence of empirical evidence assessing the viability and usefulness of this communication tool. With these observations in mind, the purposes of this study are to explain how automated text summarization applications work from an overarching, semitechnical, modestly theoretical perspective and, using ROUGE-1 (Recall-Oriented Understudy for Gisting Evaluation–1) evaluation metrics, to assess how effective the summarization software is when summarizing complex business reports. The results show that the extraction-based summarization system produced moderately satisfactory results in extracting relevant instances of text from the business reports. Considerable work remains on precision and recall in extraction-based systems before the software can match a human’s ability to capture the gist of a body of text.
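As a rough illustration of the evaluation metric named above, the sketch below computes ROUGE-1 precision, recall, and F1 from unigram overlap between a machine-extracted summary and a human reference. The tokenization, function name, and example sentences are illustrative assumptions, not the study’s actual pipeline or data.

```python
from collections import Counter

def rouge_1(candidate: str, reference: str) -> dict:
    """Compute ROUGE-1 precision, recall, and F1 from unigram overlap.

    Illustrative only: whitespace tokenization and lowercasing stand in for
    whatever preprocessing the study's summarization system actually used.
    """
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())

    # Each unigram contributes to the overlap at most as often as it
    # appears in both the candidate and the reference.
    overlap = sum(min(cand_counts[w], ref_counts[w]) for w in cand_counts)

    precision = overlap / max(sum(cand_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical example: an extracted sentence scored against a human-written summary.
print(rouge_1("revenue grew in the fourth quarter",
              "the company reported revenue growth in the fourth quarter"))
```

In this framing, recall reflects how much of the human reference the extracted text recovers, while precision reflects how much of the extracted text is actually relevant, which is the trade-off the abstract identifies as the remaining challenge for extraction-based systems.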
