Abstract
Due to the rapid growth of multimedia archives, the semantic gap between machine-learning-based semantic concepts and the local features of an image has become a critical obstacle to accurate image retrieval. To address this issue, this article introduces two novel methods for effective image retrieval: visual words integration after clustering (VWIaC) and feature integration before clustering (FIbC). Both methods combine the complementary histograms of oriented gradients (HOG) and oriented FAST and rotated BRIEF (ORB) descriptors, computed on salient objects within the images, under the bag-of-words (BoW) model to build codebooks of smaller and larger sizes. Larger codebooks yield higher specificity for the image retrieval system but lower sensitivity, and vice versa. The proposed VWIaC method first builds two smaller codebooks to achieve high sensitivity; the visual words of both codebooks are then integrated into a single larger codebook, which improves the specificity of the proposed method. The performance of the proposed method is evaluated on three standard image benchmarks, which confirms its robust performance compared with the FIbC method and recent CBIR methods.
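The VWIaC idea described above — clustering each descriptor type into its own smaller codebook and then integrating the resulting visual words into one larger representation — can be sketched as follows. This is a minimal numpy-only illustration, not the authors' implementation: random vectors stand in for real HOG and ORB descriptors, the codebook size `k` is illustrative, and the helper functions `kmeans` and `bow_histogram` are assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(feats, k, iters=20):
    """Plain k-means: random initial centers, then assign/update loops."""
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(feats[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return centers

def bow_histogram(feats, centers):
    """Quantize descriptors to their nearest codebook center and
    return a normalized visual-word histogram."""
    dists = np.linalg.norm(feats[:, None] - centers[None], axis=2)
    hist = np.bincount(dists.argmin(axis=1),
                       minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Stand-ins for real descriptors of one image (assumption: random data;
# in practice these come from HOG and ORB over salient objects).
hog_feats = rng.normal(size=(200, 36))
orb_feats = rng.normal(size=(200, 32))

k = 50  # size of each smaller codebook (illustrative choice)
hog_codebook = kmeans(hog_feats, k)
orb_codebook = kmeans(orb_feats, k)

# VWIaC: cluster each feature type separately (two smaller codebooks),
# then integrate the visual words into one 2k-dimensional representation.
image_repr = np.concatenate([bow_histogram(hog_feats, hog_codebook),
                             bow_histogram(orb_feats, orb_codebook)])
```

In a real pipeline the codebooks would be learned over descriptors from the whole training collection, and `image_repr` would be the retrieval feature compared across images; the contrast with FIbC is that FIbC would concatenate or fuse the raw HOG and ORB features first and cluster the combined features into a single codebook.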
