Abstract
The operational (airborne) Enhanced/Synthetic Vision System will employ a helmet-mounted display in which a background synthetic image surrounds a fused inset sensor image. In the present study, three subjects viewed an emulation of a descending flight to a crash site displayed on an SVGA monitor. Independent variables were: 3 fusion algorithms; 3 visibility conditions; 2 sensor conditions; and 9 sensor/synthetic image misregistration conditions. The task was to detect specified terrain features, objects, and image anomalies as they became visible in 16 successive fused image snapshots along the flight path. The fusion of synthetic images with corresponding sensor images supported consistent subject performance with the simpler algorithms (averaging and differencing). Performance with the more complex opponent-process algorithm was less consistent, and more image anomalies were generated. Reductions in synthetic scene resolution did not degrade performance, but elevation source data errors interfered with scene interpretation. These results are discussed within the context of operational requirements.
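The two simpler fusion schemes named in the abstract, averaging and differencing, can be illustrated with a minimal pixelwise sketch. This is not the study's implementation (its exact scaling and normalization are not specified here); it is a generic illustration assuming co-registered, same-size grayscale images, with the difference image offset to mid-gray so both polarities remain visible.

```python
import numpy as np

def fuse_average(sensor, synthetic):
    """Pixelwise average of two co-registered grayscale images (assumed same shape)."""
    return (sensor.astype(np.float64) + synthetic.astype(np.float64)) / 2.0

def fuse_difference(sensor, synthetic):
    """Pixelwise difference, halved and offset to mid-gray (127.5) so that
    both positive and negative discrepancies stay within display range."""
    return (sensor.astype(np.float64) - synthetic.astype(np.float64)) / 2.0 + 127.5

# Tiny 2x2 example "images" (hypothetical values, for illustration only)
sensor = np.array([[0, 100], [200, 255]], dtype=np.uint8)
synthetic = np.array([[50, 100], [100, 55]], dtype=np.uint8)

avg = fuse_average(sensor, synthetic)    # e.g. avg[0, 0] == 25.0
diff = fuse_difference(sensor, synthetic)  # e.g. diff[1, 0] == 177.5
```

A misregistration between the sensor and synthetic images, one of the study's manipulated conditions, would show up in the difference image as bright or dark edge artifacts where corresponding features fail to overlap.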