Abstract
Smartphone-based periocular recognition (SPR) has gained significant attention because of the limitations of the face and iris biometric modalities. Most existing methods for this problem employ hand-crafted features, whereas deep convolutional neural networks (CNNs), which learn features automatically, have outperformed hand-crafted features on many visual recognition tasks. In view of this paradigm shift, we propose a CNN-based SPR method. Training a CNN model requires a huge volume of data, but only a limited amount is available for the periocular recognition problem. One solution is to use a CNN model pre-trained on a dataset from a related domain, but this raises the questions of how to extract discriminative features from a pre-trained model and how to classify them. We introduce a simple, efficient, and compact image representation based on a pre-trained CNN model (VGG-Net); it exploits the wealth of information and the sparsity present in the activations of the model's convolutional layers. For recognition, we use an efficient and robust Sparse Augmented Collaborative Representation based Classification (SA-CRC) technique. For a thorough evaluation of ConvSRC (the proposed system), experiments were carried out on the VISOB database, which served as the challenge dataset at ICIP2016. The results show the superiority of ConvSRC over state-of-the-art methods: it obtains a GMR of more than 99% at FMR = 10⁻³ and outperforms the winner of the ICIP2016 challenge by 10%.
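The recognition step rests on collaborative representation based classification, in which a query feature vector is coded over a dictionary of gallery features via regularized least squares and assigned to the class with the smallest class-wise reconstruction residual. The sketch below shows only the plain CRC-RLS baseline on a synthetic dictionary `D` (columns are gallery feature vectors); the dictionary, labels, and regularization weight `lam` are illustrative, and the sparse augmentation that distinguishes the paper's SA-CRC variant is not reproduced here.

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """Plain CRC-RLS: code query y over dictionary D (columns = gallery
    features), then pick the class with the smallest normalized residual.
    Illustrative baseline only; SA-CRC's sparse augmentation is omitted."""
    # Closed-form ridge solution of min_w ||y - D w||^2 + lam ||w||^2
    w = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        mask = labels == c
        # Reconstruct y using only this class's atoms and coefficients
        y_hat = D[:, mask] @ w[mask]
        residuals.append(np.linalg.norm(y - y_hat) / np.linalg.norm(w[mask]))
    return classes[int(np.argmin(residuals))]
```

In practice the columns of `D` would be the compact VGG-Net convolutional-layer representations of the gallery images, L2-normalized before coding.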
