Abstract
We introduce a new method of sign language subtitling aimed at young deaf children who have not yet acquired reading skills and can communicate only via signs. The method is based on: 1) the recently developed concept of the “semantroid™” (an animated 3D avatar limited to head and hands); 2) the design, development, and psychophysical evaluation of a highly comprehensible semantroid model; and 3) the implementation of a new multi-window, scrolling captioning technique. Guided by “semantic intensity” estimates, we enhanced the comprehensibility of the semantroid by: i) using non-photorealistic rendering (NPR); and ii) creating a 3D face model with distinctive features. We then validated the comprehensibility of the semantroid through a series of tests on human subjects that assessed the accuracy and speed of recognition of facial stimuli and hand gestures as a function of rendering mode and facial geometry. Test results show that, in the context of sign language subtitling (i.e., in limited space), the most comprehensible semantroid model is a toon-rendered model with distinctive facial features. Because of its enhanced comprehensibility, this type of semantroid can be scaled to fit in a very small area, making it possible to display multiple captioning windows simultaneously. The concurrent display of several progressively animated signed sentences allows viewers to review information, a feature absent from all previously presented sign language subtitling methods. As an example application, we applied the multi-window, scrolling captioning technique to a children's video of a chemistry experiment.