Abstract
Recent research on human-computer interaction (HCI) has incorporated users' behavior and intention into interface design, and automatic facial expression analysis offers a new modality for this field. Automatic facial expression recognition systems have therefore become increasingly important in recent years. This study demonstrates the advantages of the proposed mixed-feature model and its capability of identifying human facial expressions in static images. The framework is a multistage discrimination model that combines global appearance features extracted by two-dimensional principal component analysis (2DPCA) with local texture features represented by local binary patterns (LBP). A weighted combination of the 2DPCA and LBP features is fed to a decision directed acyclic graph (DDAG) based support vector machine (SVM) classifier, which discriminates among several prototypic facial expressions. Extensive experiments were performed on the four benchmark databases most commonly cited in the literature: Yale, JAFFE, NimStim, and Cohn-Kanade. The experimental results indicate that the proposed mixed-feature model is feasible and outperforms single-feature models, and the analysis shows that the proposed method is more accurate than alternative schemes on the same databases.
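The feature-fusion stage described above can be sketched roughly as follows. This is an illustrative minimal version, not the paper's implementation: the basic 8-neighbor LBP variant, the number of retained 2DPCA components, the equal fusion weights, and all function names are assumptions, and the DDAG-SVM classification stage is omitted.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    # Basic 8-neighbor LBP: threshold each neighbor against the center pixel
    # and pack the 8 comparison bits into a code, then histogram the codes.
    h, w = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()  # normalized texture histogram

def two_d_pca(images, k=2):
    # 2DPCA: build the image scatter matrix directly from 2D images
    # (no vectorization) and project each image's rows onto the top-k
    # eigenvectors, yielding an (h x k) feature matrix per image.
    mean = images.mean(axis=0)
    G = sum((a - mean).T @ (a - mean) for a in images) / len(images)
    _, eigvecs = np.linalg.eigh(G)          # ascending eigenvalue order
    X = eigvecs[:, ::-1][:, :k]             # top-k eigenvectors
    return [a @ X for a in images]

def fuse(lbp_hist, pca_proj, w_lbp=0.5, w_pca=0.5):
    # Weighted combination of the two feature types (hypothetical equal
    # weights; the abstract does not state the actual weighting scheme).
    f1 = lbp_hist / (np.linalg.norm(lbp_hist) + 1e-12)
    f2 = pca_proj.ravel()
    f2 = f2 / (np.linalg.norm(f2) + 1e-12)
    return np.concatenate([w_lbp * f1, w_pca * f2])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    faces = rng.integers(0, 256, size=(5, 16, 16)).astype(np.float64)
    hist = lbp_histogram(faces[0])
    projections = two_d_pca(faces, k=2)
    feature = fuse(hist, projections[0])
    print(feature.shape)  # fused global + local feature vector
```

The fused vector would then be the input to the DDAG-SVM classifier stage described in the abstract.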
