Abstract
Accurately aligning the details and distinctive features of animated characters is a crucial aspect of creating character images in digital media, and facial contour feature matching together with seamless splicing is essential for conveying precise facial expressions, making these key technologies in animated character production. Traditional facial motion capture suffers from low matching accuracy, low stitching efficiency, and severe image distortion after stitching, which makes it difficult for the audience to perceive a character’s expressions. To address these issues, this paper proposes an animation-based face contour feature matching method. First, a neural network is trained on face images and animation-character images to match the datasets. Then, a Gaussian low-pass filter is used to scale and denoise the complex animation image in the scene; the contour of the smoothed image is detected by the double-threshold method, contour features are extracted, and a matching relationship is built. Finally, facial key-point bone parameters are combined with face geometry correction, and the best-matching pair of faces is selected for image contour stitching based on a qualitative analysis of character expressions. Simulation results show that the method achieves better feature-matching accuracy, shorter running time, and stronger anti-distortion ability than traditional stitching methods, which helps improve the efficiency and output of animation production.
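The preprocessing pipeline summarized above (Gaussian low-pass denoising followed by double-threshold contour detection) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the kernel size, sigma, thresholds, and the synthetic test image are all assumptions chosen for demonstration.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.2):
    """2-D Gaussian low-pass kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Simple same-size 2-D correlation with edge padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + h, j:j + w]
    return out

def double_threshold_edges(img, low=0.1, high=0.3):
    """Double-threshold contour detection on the gradient magnitude:
    pixels above `high` are strong edges; pixels between `low` and
    `high` are kept only if adjacent to a strong edge (one hysteresis pass)."""
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x
    gx = convolve2d(img, sx)
    gy = convolve2d(img, sx.T)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    strong = mag >= high
    weak = (mag >= low) & ~strong
    padded = np.pad(strong, 1)
    h, w = strong.shape
    near_strong = np.zeros_like(strong)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            near_strong |= padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
    return strong | (weak & near_strong)

# Synthetic "animation frame": a bright square on a dark background, plus noise.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)

smoothed = convolve2d(img, gaussian_kernel())  # Gaussian low-pass denoising
edges = double_threshold_edges(smoothed)       # contour of the smoothed image
```

The resulting boolean `edges` map marks the square's contour; in a full pipeline, descriptors extracted along such contours would feed the feature-matching stage.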