Abstract
This paper presents a methodology for face recognition based on an information-theoretic approach to coding and decoding face images. We propose a 2D-3D face-matching method based on a principal component analysis (PCA) algorithm that uses canonical correlation analysis (CCA) to learn the mapping between a 2D face image and 3D face data, making it possible to match a 2D face image against enrolled 3D face data. Our proposed fusion algorithm is based on the PCA method, which is applied to extract base features; PCA feature-level fusion requires extracting different features from the source data before the features are merged. Experimental results on the TEXAS face image database show that classification and recognition based on the modified CCA-PCA method are superior to those based on the CCA method alone: in the 2D-3D face-matching tests, the CCA method achieved a recognition rate of only 55%, while the modified CCA method based on PCA-level fusion achieved 85%.
1. Introduction
The human face is a primary focus of attention in society, playing a major role in conveying identity and emotion. Face recognition is a technology which recognizes a person by his/her face image. The problem of human face recognition is complex and highly challenging because it has to deal with a variety of parameters, including illumination, pose orientation, expression, head size, occlusion and face background [1]. Most of the proposed methods for face recognition deal with 2D appearance, such as eigenfaces [2] and fisherfaces [3]; naturally, such methods are the most sensitive to varying illumination and pose. In recent years, new methods have been introduced in image processing and 2D face recognition which are less sensitive to these factors (illumination, pose orientation, expression). CCA and kernel CCA (KCCA) belong to these methods. CCA has been widely used in several fields, such as signal processing [4], medical studies and pattern recognition [5]. CCA can simultaneously deal with two sets of data, in contrast to PCA and linear discriminant analysis (LDA). KCCA has also been successfully applied to content-based retrieval, text mining and facial expression recognition. Recently, kernel methods have attracted much attention due to their good nonlinear properties [6]. Since the shape of a face is independent of its illumination and pose, 3D face recognition has the potential to improve performance under uncontrolled conditions. Some 3D face recognition algorithms have already been proposed and very high recognition rates have been reported. Nevertheless, 3D technology is still not used widely in practical applications due to its computational complexity and the cost of equipment.
This paper employs a new feature projection approach based on the CCADouble method and a combined CCA-PCA method using image fusion. We apply CCA and modifications of CCA for mapping between 2D and 3D face data, using the 2D face image as a probe and the 3D face data as a gallery. The proposed 2D-3D face recognition approach has lower computational complexity than conventional 3D-3D face recognition systems [7].
The rest of the paper is organized as follows. In Section 2, the related work is briefly reviewed. The face recognition system and the proposed methodology are discussed in Section 3 and Section 4. The experimental results are listed in Section 5 and, finally, we summarize this paper in Section 6.
2. Related Work
There are several approaches to tackling the 2D-3D face-matching problem. Rama et al. [8] use partial PCA (P2CA) to match a 2D face image (probe) with 3D face data (gallery). In [9], D. Riccio et al. propose a particular 2D-3D face recognition method based on 16 geometric invariants, which are calculated from a number of ‘control points’. In [10], W. Yang et al. propose a pixel-level image fusion scheme based on PCA and other methods. The published results on multimodal 2D-3D face recognition have shown that recognizing faces from combined 2D and 3D facial data yields better performance than using solely 2D or 3D data. Recent surveys on multi-modal 2D-3D face recognition approaches can be found in contributions by Bowyer et al. [11] and Gao et al. [12]. Inspired by the results presented in the papers listed above, we propose a modified 2D-3D fusion algorithm based on PCA.
In [13], G. Kukharev et al. propose a two-dimensional CCA (2DCCA) applied to image processing and biometrics. In general, the key idea behind the CCA approach is to summarize the high-dimensional relationship between two sets of variables with a few pairs of canonical variables. It was originally intended to describe relations between two sets of 1D data sequences [14], [15]. For multi-modal features, CCA can be used for feature-level fusion. The rapid development of computer technologies has increased memory capacity and processing speed, and the worldwide use of software packages for digital image processing (e.g., MATLAB, LabVIEW, Statistics) and applications of mathematical modelling have promoted the application of CCA in the processing of multidimensional data. Such data may include face images, hand gestures, etc. [16], [17].
3. Face Recognition System
One of the most challenging problems that face recognition must deal with is the appropriate separation of data that belongs to the same class. In face recognition, a class represents all the data of the same subject (data of all the images of the same person). The goal is to implement an automated machine-supported system that recognizes a person's identity in images (after initialization and training by a representative sample of images). This can have various practical applications, such as automated person identification and the recognition of race, gender, emotions, etc. [18].
The procedure for the face recognition system is as follows. First, the features of images are extracted. Second, the classifier is trained on a training set of images and models for various classes are generated. Finally, these classification models are used to predict test images (see Figure 1). Common transform methods are listed in the middle column of Figure 1 (PCA, CCA, KCCA and CCA-PCA).

Figure 1. Methods of face recognition systems
3.1 Overview of PCA, CCA and KCCA
PCA [18] is a standard technique for dimensionality reduction and has been applied to a broad class of computer vision problems, including feature selection, object recognition and face recognition.
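As a minimal illustration of PCA-based dimensionality reduction, the principal axes can be obtained by diagonalizing the sample covariance matrix. This sketch uses synthetic data; the array shapes and variable names are illustrative, not the paper's implementation:

```python
import numpy as np

# Toy data: 20 samples, 5 features (a stand-in for face feature vectors).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))

# Center the data, then diagonalize the sample covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # reorder to descending variance
components = eigvecs[:, order[:2]]       # keep the 2 leading principal axes

# Project the centered data onto the reduced basis.
X_reduced = Xc @ components
```

The first projected coordinate carries the largest variance, the second the next largest, which is what makes the truncated basis useful for compact face descriptors.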
CCA is a well-established technique which can be used for exploring the relationships among multiple dependent and independent variables [19], [20]. Therefore, a powerful feature projection approach for facial images, based on CCA, is proposed. CCA identifies and measures the relationship between two sets of variables [20]. Compared to other projection approaches like PCA and LDA, CCA can concurrently deal with two sets of data. For two sets of multivariate data, $X \in \mathbb{R}^{p \times n}$ and $Y \in \mathbb{R}^{q \times n}$, CCA seeks projection vectors $w_x$ and $w_y$ that maximize the correlation

$$\rho = \frac{w_x^{T} C_{xy} w_y}{\sqrt{\left(w_x^{T} C_{xx} w_x\right)\left(w_y^{T} C_{yy} w_y\right)}},$$

whereby $C_{xx}$ and $C_{yy}$ denote the within-set covariance matrices and $C_{xy} = C_{yx}^{T}$ the between-set covariance matrix. Maximizing $\rho$ leads to the generalized eigenvalue problem

$$C_{xx}^{-1} C_{xy} C_{yy}^{-1} C_{yx}\, w_x = \rho^{2} w_x,$$

where the eigenvalues $\rho^{2}$ are the squared canonical correlation coefficients and the corresponding eigenvectors give the canonical projection directions.
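The leading canonical correlation between two data sets can be computed in a few lines of NumPy. This is a minimal sketch on synthetic two-view data sharing a common latent signal; the data and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Two views sharing a common latent signal z, plus independent noise.
z = rng.normal(size=n)
X = np.column_stack([z + 0.1 * rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([z + 0.1 * rng.normal(size=n), rng.normal(size=n)])

# Within-set and between-set covariance matrices of the centered views.
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
Cxx = Xc.T @ Xc / n
Cyy = Yc.T @ Yc / n
Cxy = Xc.T @ Yc / n

# Generalized eigenproblem: eigenvalues are squared canonical correlations.
M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
eigvals, _ = np.linalg.eig(M)
rho = np.sqrt(np.max(eigvals.real))   # leading canonical correlation
```

Because both views contain the same latent signal with little noise, the leading canonical correlation comes out close to 1.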
Because of the nonlinear relationship between 2D face images and 3D face data, CCA may not extract useful descriptors of the data. Therefore, here, we also introduce KCCA [22], which offers an alternative solution to this drawback. KCCA projects the data into a higher-dimensional feature space prior to performing CCA. In other words, KCCA is a nonlinear variant of CCA, the goal of which is to identify and quantify the association between two sets of variables. Kernels are methods of implicitly mapping data into a higher (even infinite) dimensional feature space.
After the kernel mapping, the CCA calculation can be carried out in the mapped space. KCCA thus succeeds in finding common semantic features between different views of the data. The input data and structure of the KCCA calculation are schematically presented in Figure 2.
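A regularized KCCA can be sketched by replacing covariance matrices with centered kernel Gram matrices. The RBF kernel, the toy nonlinear data and the regularization constant below are assumptions for illustration; the paper does not specify these choices:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gram matrix of the RBF kernel: implicit map to a high-dimensional space.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
n = 60
t = rng.uniform(-1, 1, size=n)
# Two views related nonlinearly through the latent variable t (toy data).
X = np.column_stack([t, t ** 2]) + 0.05 * rng.normal(size=(n, 2))
Y = np.sin(np.pi * t)[:, None] + 0.05 * rng.normal(size=(n, 1))

def center(K):
    # Double-center the Gram matrix (equivalent to centering in feature space).
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

Kx, Ky = center(rbf_kernel(X, X)), center(rbf_kernel(Y, Y))

# Regularized KCCA eigenproblem; eps guards against trivial perfect correlation.
eps = 0.1
M = np.linalg.solve(Kx + eps * np.eye(n), Ky) @ \
    np.linalg.solve(Ky + eps * np.eye(n), Kx)
rho = np.sqrt(np.abs(np.linalg.eigvals(M)).max())  # leading kernel canonical corr.
```

Without the regularization term, the Gram matrices are typically invertible and KCCA would report a meaningless correlation of exactly 1, which is why the ridge term is essential in practice.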

Figure 2. The principles of the KCCA algorithm for 2D-3D face recognition
4. Proposed Method
In this section, we present the proposed method, which fuses two approaches to face recognition, namely the CCADouble and CCA-PCA algorithms. PCA is used to find the orthonormal decorrelation axes, which are the eigenvectors of the covariance matrix. The objective of the proposed method is to recognize a 2D object containing a human face. The proposed integrated system follows previous work on face recognition [25], but new contributions in our approach, along with improvements to the existing methods, result in a significant performance gain.
A scenario of the face recognition process is illustrated in Figure 3. In this figure, the gallery denotes a set of known individuals. The images used for testing the algorithms are called ‘probes’. A probe is either a new image of an individual in the gallery or an image of an individual not present in the gallery. The recognition algorithm returns the best match between each probe and the images in the gallery; the estimated identity of the probe is this best match.

Figure 3. A simple example of a face recognition system [26]
In the proposed method, the training phase and the testing phase are divided into the following stages:
Training phase:
Training images are selected and placed in the folder. The training images are read. The PCA method is trained and tuned for the testing phase.
Testing phase:
The test images are read. PCA is applied first. The feature vectors are identified using the Euclidean distance function on the training data. Using the kernel trick, the distance matrix is converted into a kernel matrix (which can be applied to multiclass CCA-PCA classification). The classified object and its label are displayed.
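The distance-to-kernel conversion mentioned above can be sketched as follows. The Gaussian kernel and the mean-distance bandwidth heuristic are assumptions, since the paper does not specify them:

```python
import numpy as np

rng = np.random.default_rng(3)
feats = rng.normal(size=(10, 4))   # hypothetical PCA feature vectors

# Pairwise Euclidean distance matrix between the feature vectors.
D = np.sqrt(((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1))

# Kernel trick: turn distances into similarities with a Gaussian kernel.
sigma = D.mean()                   # heuristic bandwidth (an assumption)
K = np.exp(-D ** 2 / (2 * sigma ** 2))
```

The resulting matrix is symmetric with ones on the diagonal (each vector is maximally similar to itself), as a kernel matrix should be.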
In the training phase, an eigenspace is established from the training samples using PCA, and the training face images are mapped to the eigenspace for classification. In the classification phase, an input face is projected to the same eigenspace and classified by an appropriate classifier. Let the training set of face images be $\Gamma_1, \Gamma_2, \ldots, \Gamma_M$, where each image is rearranged into a column vector. The mean face of the set is $\Psi = \frac{1}{M}\sum_{i=1}^{M}\Gamma_i$, and each face differs from the mean by the vector $\Phi_i = \Gamma_i - \Psi$. This set of very large vectors is then subjected to PCA in the training phase. The training set contains 2D-3D pairs of 50 subjects, and the test set contains 2D-3D pairs of the other 25, 50, 75 and 115 subjects. The training is performed on these 2D-3D pairs.
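The training-phase computation (mean face, difference vectors, PCA) together with nearest-neighbour classification in the eigenspace can be sketched on toy data as follows; the tiny random data set and all names are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy stand-in: 6 training "images" of 3 subjects (2 each), flattened to vectors.
train = rng.normal(size=(6, 64))
labels = np.array([0, 0, 1, 1, 2, 2])

# Build the eigenspace: subtract the mean face, then take the leading
# right-singular vectors of the difference matrix (equivalent to PCA).
mean_face = train.mean(axis=0)
A = train - mean_face
_, _, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:3]                  # keep 3 leading eigenvectors

train_proj = A @ eigenfaces.T        # training images mapped to the eigenspace

def classify(probe):
    # Project the probe into the same eigenspace and return the label
    # of the nearest training sample (Euclidean distance).
    w = (probe - mean_face) @ eigenfaces.T
    d = np.linalg.norm(train_proj - w, axis=1)
    return labels[np.argmin(d)]
```

A probe identical to a stored training image projects onto exactly the same eigenspace coordinates, so it is assigned that image's class.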

Figure 4. Training and testing phases of the CCA-PCA algorithm
4.1 2D-3D Image Fusion using CCA-PCA
The purpose of image fusion is to integrate the complementary information scattered in the source images to generate one composite image which contains a more accurate description of the scene than any of the individual source images. Image fusion is a useful technique for image analysis and computer vision, which can reduce errors in the detection and recognition of objects.
In this subsection, the fusion of images obtained from 2D images and 3D data is explained. At the image level, the 2D and 3D image modalities are treated as random variables, and pixel values provide observations of these variables. PCA is performed with the aim of reducing a large set of variables to a smaller set that still preserves most of the information from the original set. It is obvious that such a reduced set is more convenient for analysis and interpretation. The main goal of CCA is to find two sets of basis vectors, one for each region, such that the correlations between the projections of variables onto them are mutually maximized. Each set of vectors corresponds to one face region. In other words, the basis vectors obtained by CCA can transform the vectors of two corresponding regions into a unified subspace. CCA-PCA has also been used extensively in image compression and classification.
Let a 2D image be rearranged into a vector X and the corresponding 3D data into a vector Y; image fusion then combines these two vectors into a single fused representation.
As shown in Figure 5, the fused images are composed of facial cues from the 2D and 3D images. The fused images represent equally prominent features of the 2D and 3D images, as well as additional prominent structure cues.
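One common PCA-based pixel-level fusion rule, of the kind cited from [10], derives the fusion weights from the leading eigenvector of the 2×2 covariance of the two source images. The sketch below uses random stand-in images, and the exact rule is an assumption rather than the paper's stated procedure:

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy stand-ins for a registered 2D intensity image and a 3D range image.
img2d = rng.uniform(size=(8, 8))
img3d = rng.uniform(size=(8, 8))

# Treat the two flattened images as two variables; diagonalize their covariance.
data = np.vstack([img2d.ravel(), img3d.ravel()])
cov = np.cov(data)
eigvals, eigvecs = np.linalg.eigh(cov)
v = np.abs(eigvecs[:, np.argmax(eigvals)])   # leading eigenvector components

# Normalizing the components gives the per-source fusion weights.
w = v / v.sum()
fused = w[0] * img2d + w[1] * img3d
```

Since the weights are non-negative and sum to one, each fused pixel is a convex combination of the two source pixels, so the fused image stays within the dynamic range of its sources.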

Figure 5. The principles of the fusion algorithm for 2D-3D image fusion

Figure 6. Samples of the original 2D and 3D images and the fused images
The principle behind the proposed CCA-PCA fusion algorithm is shown in Figure 7. The proposed CCA-PCA algorithm for 2D-3D face recognition takes 2D or 3D images as probes and 2D-3D fused images as a test database. First, the test set is formed from 25 subjects. A 2D or 3D face image is selected as a reference and represented by a vector X. Next, each 2D-3D fused image from the test set is transformed into the corresponding vector Y.

Figure 7. The principles of the CCA-PCA algorithm based on 2D-3D image fusion
The fused image contains information from both the 2D and 3D face images. The canonical correlation coefficient between the probe vector X and each fused-image vector Y is computed; the fused image yielding the greatest coefficient is returned as the recognition match.

Figure 8. The correct 2D-3D face recognition match using the fusion algorithm
4.2 CCADouble
In this section, experimental work on 2D-3D face recognition using the proposed CCADouble algorithm is presented. The CCADouble approach applies CCA twice, separately to the 2D and 3D face image data, as shown in Figure 9. In this case, a 2D-3D reference pair of images is fed to the algorithm as a probe, and 25 subjects from a 2D-3D face database are selected as a test set. A randomly selected pair of 2D-3D faces is represented by the vectors X and Y, which serve as the probe for the two branches of the algorithm.

Figure 9. The principles of the CCADouble algorithm for 2D-3D face recognition
The output of CCADouble is a pair of canonical correlation coefficients, one from the 2D branch and one from the 3D branch, which are combined into a single matching score.
Thus, the resulting correlation score is used to identify the relevant 2D-3D image pairs. The 2D-3D image pair with the greatest canonical correlation coefficient is classified as the correctly verified 2D-3D pair. In the next step, the number of subjects in the face database was increased to 50, 75 and 115, and the recognition experiment was repeated for each of these values. The overall 2D-3D recognition results are shown in Table 1.
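A minimal sketch of the final CCADouble decision step: given per-subject canonical correlation coefficients from the two branches (synthetic values here), the scores are fused and the gallery entry with the greatest combined score is selected. The simple averaging rule is an assumption; the paper does not state the exact combination:

```python
import numpy as np

rng = np.random.default_rng(6)
n_subjects = 5
# Hypothetical canonical correlation coefficients from the 2D and 3D branches
# for one probe against each enrolled subject.
rho_2d = rng.uniform(0.3, 0.7, size=n_subjects)
rho_3d = rng.uniform(0.3, 0.7, size=n_subjects)
rho_2d[2] = 0.95   # the true match scores highest in the 2D branch...
rho_3d[2] = 0.92   # ...and in the 3D branch

# Fuse the two branch scores (simple average, an assumed rule) and pick
# the subject with the greatest combined score.
score = (rho_2d + rho_3d) / 2
match = int(np.argmax(score))
```

Averaging the two branch scores means a subject must score well in both modalities to win, which is what makes the double application of CCA more discriminative than either branch alone.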
Table 1. Face recognition rate for different numbers of subjects
5. Experiments and Results
In this section, we evaluate the performance of the proposed algorithms for 2D-3D face recognition. The proposed method was implemented in the MATLAB environment.
5.1 Face Dataset
The experiments were performed on the TEXAS face database [27], which was developed at the Laboratory for Image and Video Engineering of The University of Texas at Austin. The TEXAS face database contains pairs of 2D face images and 3D face data for 115 faces. The images are sized 751 × 501 pixels. Each value in the z-dimension is represented in 8-bit format, with the highest value of 255 assigned to the tip of the nose and a value of 0 assigned to the background. The faces appear in neutral and expressive modes. Examples of face images from the TEXAS face database are given in Figure 10.

Figure 10. Some face images from the TEXAS database [27]
5.2 Experimental Part
First, the proposed CCADouble algorithm for 2D-3D face recognition was tested. A 2D face image was chosen as a probe and 3D face data as a test image. The procedure for 2D-3D face recognition based on the CCADouble algorithm is explained in Section 4.2 and the process is shown in Figure 9. The procedure is similar to the previous experiment based on the common CCA method, but differs in the core algorithm used for 2D-3D face recognition. The input 2D face image is selected as a reference and represented by a vector X. Next, each 3D face image from the test set (25 subjects) is transformed into the corresponding vector Y.
In the second part of our experiments, the proposed CCA-PCA method for 2D-3D face recognition was applied to the same TEXAS face database. The conventional CCA algorithm is explained in Section 3. Here, a 2D face image was taken as a probe and the 3D face data formed a test database. The test set was built from the first 25 subjects of the TEXAS face database. A random 2D face image, represented by a vector X, was selected as a probe and matched against the test set.

Figure 11. The correct 2D-3D face recognition based on the common CCA method
The overall recognition results for different numbers of subjects in the test set are shown in Table 1.
We compared the face recognition results obtained on the TEXAS face database for the different tested algorithms based on the CCA method. As can be seen in Table 1 and Figure 12, the best 2D-3D face match results were obtained using the proposed CCADouble and CCA-PCA fusion algorithms. The 2D-3D face recognition based on the other methods (the common CCA approach and the KCCA approach) gives a comparable recognition rate (above 60%) only for a small dataset. For a greater number of input 2D-3D test images in the test set, the recognition rate is much lower (around 35-45%), in contrast to the proposed CCA-PCA and CCADouble algorithms, which achieved recognition rates above 80% for the entire TEXAS database (only three and four subjects were incorrectly recognized by the CCA-PCA and CCADouble algorithms, respectively). The performance of the proposed CCA-based algorithms is very satisfactory and applicable to future work.

Figure 12. Face recognition success for different numbers of subjects
6. Conclusion
In this paper, different 2D-3D face recognition approaches using the CCA and KCCA methods, together with a newly proposed approach based on a CCA-PCA fusion method, are presented. CCA and KCCA are popular, powerful multivariate analysis and feature extraction techniques that are widely used in face recognition. We have introduced powerful techniques for 2D-3D images, which have been evaluated on the TEXAS face database. With the aim of achieving better performance, we proposed variations of the CCA approach for 2D-3D face recognition. The approaches based on common CCA and KCCA achieve relatively low recognition rates; on the other hand, the CCADouble and CCA-PCA fusion approaches achieve considerably higher rates. These approaches are particularly well suited to image processing and 2D-3D face recognition because only a few matrix multiplications are needed in the recognition process. The recognition rate of our proposed CCA-PCA approach is very satisfactory and thus applicable to future work. Moreover, in the future it might be combined with other methods to further improve robustness and accuracy.
7. Acknowledgements
This work was supported by the Slovak Science Project Grant Agency, Project No. 1/0705/13 “Image elements classification for semantic image description” and by the project “Competence Centre for research and development in the field of diagnostics and therapy of oncological diseases”, ITMS: 26220220153, co-funded from EU sources and the European Regional Development Fund.
