Abstract
Objective: To develop a novel 3-dimensional (3D) teaching tool using the Microsoft Kinect. Within this construct, 3D temporal bone anatomy is manipulated with hand gestures alone, without a mouse or keyboard.
Method: CT temporal bone data are imported into an image processing program and segmented. This information is then exported in a polygonal mesh format to a proprietary 3D graphics engine with an integrated Microsoft Kinect. Motion through the virtual environment is controlled by tracking the position of the user's hand relative to the left shoulder.
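The shoulder-relative tracking described above can be sketched in plain Python. This is a minimal illustration only: the coordinate convention, dead-zone threshold, and command names are assumptions for the sketch, not the authors' implementation, and the actual Kinect SDK joint-tracking calls are omitted.

```python
# Sketch: map a tracked hand position to a coarse navigation command by
# expressing it as an offset from the user's left shoulder.
# (Illustrative axis convention and thresholds; not the authors' code.)

def relative_hand_position(hand, left_shoulder):
    """Return the hand's (x, y, z) offset from the left shoulder, in metres."""
    return tuple(h - s for h, s in zip(hand, left_shoulder))

def gesture_command(hand, left_shoulder, dead_zone=0.10):
    """Translate the shoulder-relative offset into a navigation command.

    Offsets smaller than dead_zone are ignored so that small,
    unintentional hand movements do not move the camera.
    """
    dx, dy, dz = relative_hand_position(hand, left_shoulder)
    if max(abs(dx), abs(dy), abs(dz)) < dead_zone:
        return "hold"
    # Act on the dominant axis of motion only.
    axis, value = max((("x", dx), ("y", dy), ("z", dz)),
                      key=lambda p: abs(p[1]))
    if axis == "x":
        return "pan_right" if value > 0 else "pan_left"
    if axis == "y":
        return "pan_up" if value > 0 else "pan_down"
    return "zoom_out" if value > 0 else "zoom_in"
```

For example, a hand held well to the right of the shoulder, `gesture_command((0.5, 0.1, 0.0), (0.0, 0.0, 0.0))`, yields `"pan_right"`, while a hand resting near the shoulder stays in the dead zone and yields `"hold"`.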
Results: The construct successfully tracked scene depth and user joint locations, permitting gesture-based control over the entire 3D environment. Stereoscopy provided substantial object projection while preserving the operator's ability to fuse the image as a single view. Specific anatomic structures can be selected from within the larger virtual environment, then extracted and rotated at the discretion of the user. Voice commands using the Kinect's built-in speech library functioned but were easily confused by environmental noise.
Conclusion: There is a need for the development of virtual anatomy models to complement traditional education. Initial development is time intensive, and the constructed images are a stylized abstraction. Nonetheless, our novel gesture-controlled interactive 3D model of the temporal bone represents a promising teaching tool.
