Abstract
This paper presents a multimodal system composed of fusion and fission engines that take the user's contextual information into account. The fusion engine combines various input modalities to determine the overall context of a given situation and identifies the situation that requires action. This action is subdivided into smaller tasks that are sent to actuators, gadgets, and other output modalities for execution; this process is handled by the fission engine. Our goal is to build a system that helps people with disabilities interact with their ambient intelligent environment using their natural communication skills, such as speech and gesture. Both the fusion and fission engines rely on an ontology, which serves as the knowledge base of our system, while taking contextual semantic information into consideration. The system design is validated through case simulations formally specified with colored Petri nets. This work is our contribution to the ongoing research on robotic applications that render services to the handicapped and the elderly.
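To make the fusion/fission pipeline described above concrete, the following is a minimal sketch, not the authors' implementation: hypothetical classes (ModalityInput, FusionEngine, FissionEngine, Situation) and a plain dictionary standing in for the ontology illustrate how fused multimodal inputs could be mapped to a situation whose action is then split into tasks for actuators.

```python
# Minimal sketch of a fusion/fission pipeline. All names and the
# dict-based "ontology" are hypothetical illustrations, not the
# paper's actual system.
from dataclasses import dataclass, field

@dataclass
class ModalityInput:
    modality: str      # e.g. "speech", "gesture"
    content: str       # recognized content of the input
    confidence: float  # recognizer confidence in [0, 1]

@dataclass
class Situation:
    label: str
    tasks: list[str] = field(default_factory=list)

class FusionEngine:
    """Combines multimodal inputs into one situation, consulting a
    knowledge base (a dict here, standing in for the ontology)."""
    def __init__(self, ontology: dict[frozenset[str], Situation]):
        self.ontology = ontology

    def fuse(self, inputs: list[ModalityInput]) -> Situation | None:
        # Keep sufficiently confident inputs, then look up the
        # combination of their contents in the knowledge base.
        key = frozenset(i.content for i in inputs if i.confidence >= 0.5)
        return self.ontology.get(key)

class FissionEngine:
    """Subdivides a situation's action into smaller tasks and routes
    them to output modalities/actuators."""
    def dispatch(self, situation: Situation) -> None:
        for task in situation.tasks:
            # A real system would route each task to a device driver;
            # here we just print the dispatched task.
            print(f"actuator <- {task}")

# Hypothetical ontology entry: speech "turn on" + pointing at a lamp.
ontology = {
    frozenset({"turn on", "point:lamp"}):
        Situation("light_lamp", ["switch_on(lamp)", "confirm(speech)"]),
}

fusion, fission = FusionEngine(ontology), FissionEngine()
situation = fusion.fuse([
    ModalityInput("speech", "turn on", 0.9),
    ModalityInput("gesture", "point:lamp", 0.8),
])
if situation is not None:
    fission.dispatch(situation)
```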
