Abstract

Open-sourcing Pupil Reactivity (PuRe) score: The first lighting-invariant measure of pupil reactivity for reliable non-invasive neuro-monitoring
Aleksander Bogucki, Ivo A. John, Michal Swiatek, Hugo Chrost, Michal Wlodarski, Radosław Chrapkiewicz, and Sanjay G. Manohar
Solvemed Inc., Lewes, DE, USA
The pupillary light reflex (PLR) is an important neurological indicator used routinely in clinical practice, but its sensitivity to ambient light can confound measurements and reduce their clinical usefulness. To address this problem, we evaluated the PLR across a wide range of ambient light conditions using a smartphone-based pupillometer (AI Pupillometer, Solvemed Inc.). We studied seven PLR parameters known to be affected by ambient illumination. Our study involved 9 healthy subjects and 345 measurements spanning from darkness (<5 lx) to a very bright environment (≲10,000 lx). Strong nonlinear relationships were observed between each PLR parameter and lighting. We developed nonlinear machine learning models to mitigate the lighting effects and effectively correct these PLR parameters while preserving their clinical meaning. This novel method preserved or improved the ability of pupillometric parameters to discriminate reactive from unreactive pupils while suppressing variability due to lighting changes. From these data, we developed the Pupil Reactivity (PuRe) score, which quantifies pupil reactivity on a scale of 0–5 (0, non-reactive pupil; 0–3, abnormal/“sluggish” response; 3–5, normal/brisk response). We used the PuRe score to distinguish between reactive and unreactive pupils with high accuracy and stability under varying lighting. These new methods will enable reliable objective pupil testing in pre-hospital and clinical settings. Crucially, our methods, data and PuRe score algorithm are shared openly via GitHub for the benefit of the research community and clinicians, promoting transparency and facilitating new data integration and implementation of future enhancements to the algorithm.
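The published interpretation bands of the PuRe score can be sketched as a simple mapping. This is an illustration only, not the open-source algorithm itself (which computes the score from corrected PLR parameters); the function name and the handling of the boundary value 3 (assigned here to the normal/brisk band) are our assumptions.

```python
def interpret_pure_score(score: float) -> str:
    """Map a PuRe score (0-5) to the reactivity bands described in the abstract.

    Illustrative sketch only: the actual scoring algorithm is released on
    GitHub; this merely encodes the published interpretation bands.
    The boundary value 3.0 is assigned to the normal/brisk band by assumption.
    """
    if not 0.0 <= score <= 5.0:
        raise ValueError("PuRe score must lie in [0, 5]")
    if score == 0.0:
        return "non-reactive"
    if score < 3.0:
        return "abnormal/sluggish"
    return "normal/brisk"
```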
Optic disc and haemorrhage extraction from retina photography: Preliminary performance of a new YOLOv8-based model
Nicola Rizzieri1 and Luca Dall’Asta2
1Department of Optometry and Vision Science, University of Latvia, Riga, Latvia
2Research and Development, LIFE Srl, Bari, Italy
In recent years, the diffusion of systems based on artificial intelligence in the ophthalmology field has accelerated enormously. Several studies have proposed methods to localize, detect, and diagnose lesions, points of interest, and various ocular pathologies. For example, Santos et al., 2022 Sensors 22 (17) 6441 proposed a method to automatically localize the optic disc in the fundus image. At the same time, Guo et al. 2022 Retina 42 (6) 1095–1102 studied how to highlight lesions such as haemorrhages, microaneurysms and exudates in retinas affected by diabetic retinopathy. We present a method for localizing the optic disc and haemorrhages based on the newest state-of-the-art computer vision model, YOLOv8. After manually labelling the training dataset, we performed a series of tests, modifying the size of each trained model while maintaining the same basic input rules. We chose an Intersection over Union (IoU) threshold of 0.5 and drew the precision-recall curves by class type. From these curves we calculated the average precision (AP) metric for each class of interest individually: the optic disc is localized with an AP of 98.2%, and the haemorrhages with an AP of 55.5%. Results are promising, with a mean AP for the entire model of 77%, but future improvements could raise performance further. The ease of working with YOLOv8 will also allow us to train models for other lesions distinctive of eye diseases such as diabetic retinopathy.
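The IoU threshold of 0.5 used above decides whether a predicted box counts as a correct detection: a prediction matches a ground-truth box when their overlap area divided by their union area is at least 0.5. A minimal self-contained sketch of this standard metric (not code from the study itself), for axis-aligned boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width/height of the overlap rectangle (zero if the boxes do not intersect).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

With an IoU threshold of 0.5, a detection is counted as a true positive when `iou(pred, truth) >= 0.5`; sweeping the model's confidence threshold then yields the precision-recall curve from which AP is computed per class.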
Perceived stereo depth reflects retinal disparities, not 3D geometry
Paul Linton1,2,3 and Nikolaus Kriegeskorte3,4,5,6
1Presidential Scholars in Society and Neuroscience, Center for Science and Society, Columbia University
2Italian Academy for Advanced Studies in America, Columbia University
3Visual Inference Lab, Zuckerman Mind Brain Behavior Institute, Columbia University
4Department of Psychology, Columbia University
5Department of Neuroscience, Columbia University
6Department of Electrical Engineering, Columbia University
We present a new illusion that challenges our traditional understanding of stereo vision. Traditional ‘Triangulation’ accounts of stereo vision back-project from points on the retina to points in the world. This requires that stereo vision incorporates how binocular disparities fall off with the viewing distance squared. By contrast, Linton 2023 Phil Trans R Soc B 378: 20210455 proposes a ‘Minimal Model’ of stereo vision where perceived stereo depth is simply a function (most likely a linear function) of the amount of disparity on the retina. We present a new illusion (the ‘Linton Stereo Illusion’) to adjudicate between these two approaches. The illusion consists of a smaller circle (at 40 cm) in front of a larger circle (at 50 cm), with constant angular sizes throughout. We move the larger circle forward by 10 cm (to 40 cm) and then back again (to 50 cm). The question is, what distance should we move the smaller circle forward and back to maintain a constant perceived separation between the circles? Constant physical distance (10 cm) (‘Triangulation’) or constant disparity (6.7 cm) (‘Minimal Model’)? Observers choose constant disparity. This leads us to four conclusions: First, perceived stereo depth appears to be best captured by the ‘Minimal Model’. Second, doubling disparity appears to double perceived depth, suggesting that perceived stereo depth is proportional to disparity. Third, changes in vergence appear to have no effect on perceived depth. Fourth, stereo ‘depth constancy’ appears to be a cognitive (not perceptual) phenomenon, reflecting our experience of a world distorted in perceived stereo depth.
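The 6.7 cm constant-disparity prediction follows from the small-angle approximation for binocular disparity, under which the disparity between two points at distances $D_1$ and $D_2$ (interocular distance $I$) is proportional to their depth separation divided by the product of their distances:

```latex
\delta \approx I\left(\frac{1}{D_1} - \frac{1}{D_2}\right) = \frac{I\,\Delta}{D_1 D_2}.
% Initial configuration: circles at 40 cm and 50 cm, separation 10 cm:
\delta_0 = \frac{I \cdot 10}{40 \cdot 50} = \frac{I}{200}.
% After the larger circle moves to 40 cm, the smaller sits at (40 - x) cm.
% Constant disparity requires
\frac{I\,x}{(40 - x)\cdot 40} = \frac{I}{200}
\;\Rightarrow\; 200x = 40(40 - x)
\;\Rightarrow\; x = \tfrac{1600}{240} \approx 6.7\text{ cm},
```

whereas the Triangulation account predicts the full physical separation of 10 cm.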
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Presidential Scholars in Society and Neuroscience, Center for Science and Society, Columbia University, and the Italian Academy for Advanced Studies in America, Columbia University.
Putting things into perspective: Which visual cues facilitate automatic extraretinal symmetry representation?
Elena Karakashevska1, Marco Bertamini2, and Alexis D.J. Makin1
1Institute of Population Health, University of Liverpool
2Department of General Psychology, University of Padova
Objects project different images when viewed from different locations. Our visual system can correct for perspective distortion and identify objects from different viewpoints that alter the retinal image. This study investigated the conditions under which the visual system spends computational resources to construct view-invariant, extraretinal representations of planar symmetry. Given a symmetrical pattern on a plane, symmetry in the retinal image is degraded by perspective. Visual symmetry activates the extrastriate visual cortex and generates an event-related potential (ERP) called the sustained posterior negativity (SPN), and previous studies have found that the SPN is reduced for perspective symmetry during secondary tasks. However, this perspective cost might be reduced when additional visual cues support extraretinal representation. One hundred twenty participants viewed symmetrical and asymmetrical stimuli presented in a frontoparallel or perspective view and discriminated luminance. All participants completed four blocks. In the Baseline block there were no cues supporting 3D interpretation. In the Monocular viewing block, participants viewed the same stimuli with one eye. In the Static frame block, additional pictorial depth cues were available: the elements appeared within a flat square surface with salient edges. In the Moving frame block, motion parallax enhanced 3D interpretation before stimulus onset. Perspective cost was computed as the difference between the frontoparallel SPN and the perspective SPN. Perspective cost was reduced in all three blocks compared to baseline. We conclude that automatic extraretinal symmetry representation occurs during luminance discrimination when sufficient depth cues are available.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: ESRC postgraduate grant.
Recognising artificial colour changes
Hamed Karimipour and Christoph Witzel
School of Psychology, University of Southampton
Understanding how the colours of objects and materials (surface colours) shift across illumination changes is important for colour constancy research and its applications in art and industry. Using hyperspectral images of diverse scenes in two experiments, we simulated natural and artificial colour shifts and tested whether human observers could distinguish natural from artificial colour shifts. Unlike artificial colour shifts, the natural ones were generated when both reflectances and illuminants were natural. In Experiment 1, our manipulation was confined solely to illuminants. In Experiment 2, we preserved the previous manipulation of illuminants while also manipulating reflectances: we replaced the natural reflectances of a single object/area with artificial reflectances. Our results indicated that participants showed a small but consistent tendency to identify natural colour shifts in Experiment 1. In comparison, participants recognised natural colour shifts much more reliably in Experiment 2. The key difference between Experiments 1 and 2 was that colour shifts were homogeneous across the scene in Experiment 1, while in Experiment 2, objects with artificial reflectances deviated from the overall colour shift. Our findings suggest that colour shifts are perceived as natural when they are homogeneous across the whole scene.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Mayflower scholarship of the University of Southampton.
Support for the efficient coding account of visual discomfort
Louise O’Hare1 and Paul Hibbard2
1Psychology, Nottingham Trent University
2School of Psychology, University of Stirling
Sparse coding theories suggest that the visual brain is optimised to encode natural visual stimuli so as to minimise metabolic cost. It is thought that images that do not share the statistical properties of natural images cannot be coded efficiently and so result in visual discomfort. Conversely, artworks are thought to be processed even more efficiently than natural images and so are aesthetically pleasing. We investigated visual discomfort for a range of artificial images, natural scenes and artworks of various genres using a combination of low-level image statistical analysis, mathematical modelling and EEG measures. As shown previously, artificial images with statistical properties very different from those of natural images were indeed judged more uncomfortable. Importantly, discomfort judgements could be predicted by the overall response of a low-level model of early visual processing, supporting the efficient coding account. Moreover, low-level image statistics including the edge predictability of detailed images predicted discomfort judgements, whereas the contrast information of low-complexity images predicted the SSVEP responses. In conclusion, this study demonstrates that discomfort judgements for a wide set of images can be influenced by contrast and edge information and can be predicted by a model of low-level vision, whilst neural responses measured using EEG are better characterised by contrast-based metrics.
Visual processing speed and its association with future dementia development
Ahmet Begde1, Thomas D. W. Wilcockson1, Carol Brayne2, and Eef Hogervorst1
1School of Sport, Exercise and Health Sciences, Loughborough University
2Department of Public Health, University of Cambridge
Visual processing deficits have frequently been reported in individuals with dementia, which suggests their potential utility in supporting dementia screening. This study uses EPIC-Norfolk Prospective Population Cohort Study data (n = 8623) to investigate the role of visual processing speed, assessed by the Visual Sensitivity Test (VST), in identifying the risk of future dementia using Cox regression analyses. Individuals with lower scores on the simple and complex VST had a higher probability of a future dementia diagnosis (HR 1.39, 95% CI 1.12–1.67, p < .01, and HR 1.56, 95% CI 1.27–1.90, p < .01, respectively). Although other more commonly used cognitive dementia screening tests were better predictors of future dementia risk (HR 3.45 for the HVLT and HR 2.66 for the SF-EMSE), the complex VST showed greater sensitivity to variables frequently associated with dementia risk. Reduced complex visual processing speed is significantly associated with a high likelihood of a future dementia diagnosis and with risk/protective factors in this cohort. Combining visual processing tests with other neuropsychological tests could improve the identification of future dementia risk.
Beyond the retina – A review of acute neurological cases presenting with ophthalmological symptoms
Sonali Katti and Sankanika Roy
Department of Neurology, Leicester Royal Infirmary, UHL
There is a spectrum of acute neurological conditions that may initially manifest as visual symptoms, which can be sight-threatening or life-threatening if not promptly treated. Most commonly they present either as a vision defect involving the optic nerve (blurring of vision, visual field defect or acute complete/partial visual loss) or as an ocular muscle defect (droopy eyelid, ophthalmoplegia). Sudden onset painful, blurry or lost vision can be optic neuritis, an inflammation of the optic nerve strongly associated with multiple sclerosis. Ischemic or hemorrhagic strokes involving the optic radiations, occipital lobes, or other visual processing areas cause sudden onset visual field defects or cortical blindness. Ophthalmologists or optometrists often refer patients to neurology with optic-disc swelling (papilledema), subsequently diagnosed as raised intracranial pressure; differentials include idiopathic intracranial hypertension, cerebral venous sinus thrombosis and intracranial space-occupying lesions, which can be sight- or life-threatening if not treated promptly. Gradual onset ophthalmoplegia or a drooping eyelid often indicates myasthenia gravis (MG), an autoimmune disorder affecting neuromuscular junctions that presents as either isolated ocular or generalized MG and can be life-threatening if not treated promptly. Sudden onset ophthalmoplegia often suggests a stroke involving the cranial nerves III, IV and VI, which control the ocular muscles. History, clinical examination, magnetic resonance imaging (MRI), cerebrospinal fluid analysis, and neurophysiological assessment play pivotal roles in establishing the diagnosis. Treatment modalities encompass corticosteroids, immunosuppressants, surgical interventions, and targeted therapies tailored to specific etiologies. Emphasis should be placed on promptly recognizing the red-flag signs of neuro-ophthalmologic emergencies and treating the condition in a timely manner.
Smartphone-based neuro-ocular biomarkers for mild traumatic brain injury (mTBI) screening in contact sports
Aleksander Bogucki1, Lukasz Zinkiewicz1, Ivo A. John1, Michal Wlodarski1, Kerry Glendon2, Radosław Chrapkiewicz1, Sanjay G. Manohar1, and Thomas D. W. Wilcockson2
1Solvemed Inc., Lewes, DE, USA
2School of Sport, Exercise, and Health Sciences, Loughborough University, UK
Pupil light reflex (PLR) parameters can be altered by mild traumatic brain injury (mTBI). The advent of smartphone-based quantitative pupillometry has enabled new potential applications, including in point-of-care settings. Building on this work, this study explores the ability to detect sports-related mTBI using PLR parameters measured with a smartphone-based pupillometer (AI Pupillometer, Solvemed Inc.). Observations on N = 9 mTBI cases indicate altered pupillary dynamics in post-mTBI individuals compared to controls and pre-mTBI individuals. The study's findings suggest significant deviations in the pupillary responses of post-mTBI individuals, revealed by a linear mixed model that removes varying ambient light as a confounding factor. The research continues to focus on applying the lighting-invariant PLR parameters and the Pupil Reactivity (PuRe) score to further improve the reliability of smartphone-based pupillary assessments in environments relevant to sports, for example pitch-side, in both outdoor and indoor settings.
Facilitating transitions: Guiding principles for community-based tool and app development for youth with vision impairment
Aikaterini Tavoulari1, Michael J. Proulx1,2, and Karin Petrini1,2,3
1Psychology Department, University of Bath
2The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA)
3Bath Institute for the Augmented Human (IAH)
The transition from adolescence to adulthood represents a fundamental step for everyone; however, for young people with vision impairment (VI) there are several added barriers and challenges (Tomlinson and Killingback, 2024 British Journal of Visual Impairment 1–15). Hence, we aimed to establish what tool/application could ease this transition by working closely with young VI people from across the country, using a community-based participatory research (CBPR) approach (Metatla et al., 2015 CoDesign 11 35–48). This ongoing qualitative research was carried out by working alongside 20 young people (divided into three smaller working groups, of which two met online and one in person) with different VIs and complexity of conditions for four months. The data collection methods included open discussion of VI priorities during the working group meetings, exchanges of ideas and discussions between meetings through a dedicated forum, and qualitative surveys and reports on specific topics (e.g., events the young people would be interested in and what needs to be considered for these events to be successful, or applications they have experienced and like or dislike and why). By placing a strong emphasis on CBPR and on co-researching and co-designing with the end users, this study reported limitations (e.g., personal and social assumptions, cultural disparities, fragmented tools, complex websites, communication barriers) and positive aspects (e.g., guidance from sighted or VI individuals) of existing applications. The goal is to distil valuable information guiding the development of new, accessible, and beneficial applications for young individuals with VI navigating similar challenges.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Guide Dogs for the Blind Association.
Hierarchical colour-coding for optimising automotive display interfaces
Yao Zhou and Dengkai Chen
School of Mechanical Engineering, Northwestern Polytechnical University
Efficient information processing is key when humans interact with intelligent automobiles through touch screens, for example in situations of emergency. This study aimed to optimise the use of colour to efficiently convey information to the human driver on a Tesla Model 3 touch screen. We divided the interface information into groups according to their task relevance and designed the colour coding hierarchically, following the priority of the displayed information, varying from red for most important to green for least important. We compared the hierarchically colour-coded display with the original Tesla display. Observers had to find a target in a visual search task while we recorded accuracy, response times, and eye movements. Our hierarchical colour-coded display resulted in higher accuracy and shorter search times, indicating increased attentional capture by colour. This finding suggests that our approach to colour coding improves information processing, thus enhancing the safety and reliability of automotive driving.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: School of Mechanical Engineering, Northwestern Polytechnical University.
