Abstract
With the immense growth of multimedia content for education and other purposes, the availability of video content has also increased; nevertheless, content retrieval remains a challenge. Identifying the similarity of two videos based on their internal content depends heavily on key-frame extraction, which makes the process highly time-complex. Many recent research attempts have approached this problem with the intention of reducing time complexity, for example by converting video to text and then analysing the similarity of the extracted texts. This strategy, however, is language dependent and has been criticised for issues such as local-language and paraphrase dependencies. This work therefore approaches the problem from a different direction: reducing the number of video key frames using adaptive similarity. The proposed method analyses the key frames extracted from the library content and from the search video based on various parameters, and reduces the key frames using adaptive similarity. The work also employs machine learning and parallel programming algorithms to reduce the time complexity to a great extent. The final outcome is a reduced-time-complexity algorithm for video-to-video content retrieval. The method demonstrates a nearly 50% reduction in key frames without loss of information, a nearly 70% reduction in time complexity, and 100% accuracy on search results.
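The adaptive key-frame reduction summarised above can be sketched as follows. This is an illustrative outline only, not the paper's implementation: the histogram signature, the cosine similarity measure, and the threshold-adaptation rule are all assumptions chosen for the sketch.

```python
import numpy as np

def frame_signature(frame, bins=16):
    """Summarise a frame as a normalized intensity histogram (assumed signature)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def reduce_key_frames(frames, base_threshold=0.9):
    """Keep a frame only if it differs enough from the last kept frame.

    The similarity threshold adapts over the sequence: it is pulled toward
    the running mean similarity, so largely static footage is pruned more
    aggressively (one possible reading of "adaptive similarity").
    """
    if not frames:
        return []
    kept = [0]
    last_sig = frame_signature(frames[0])
    sims = []
    threshold = base_threshold
    for i, frame in enumerate(frames[1:], start=1):
        sig = frame_signature(frame)
        # cosine similarity between the two histogram signatures
        denom = np.linalg.norm(last_sig) * np.linalg.norm(sig)
        sim = float(last_sig @ sig / denom) if denom else 1.0
        sims.append(sim)
        if sim < threshold:  # sufficiently different: keep as a key frame
            kept.append(i)
            last_sig = sig
        # adapt: blend the base threshold with the observed mean similarity
        threshold = 0.5 * base_threshold + 0.5 * float(np.mean(sims))
    return kept
```

On a sequence of near-identical frames this keeps only the first one, while a visually distinct frame triggers a new key frame, which is the kind of reduction the abstract reports.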
