[By: Clément Dardenne]
Over the last couple of months, I have been working on a new version of our eye tracking software, adapted to the head morphology and eye anatomy of infants (6 months – 3 years old).
To validate the new algorithms, feedback from researchers involved in infant studies is essential. On top of that, we need to enlarge our database of infant recordings to keep improving our algorithms. The best way to achieve both goals is to provide a Smart Eye system to a partner Babylab in the MOTION project: it lets us check whether the system meets the researchers’ expectations, and these labs can involve many infants in their studies.
That’s why Smart Eye decided to lend one of its systems, free of charge, to the Lancaster University Babylab for a period of 6 months. The system consists of 3 cameras and comes with an alpha version of the software adapted to infants. It will be used to carry out a study on peripheral vision in 6-month-old infants.
The experiment consists of displaying specific stimuli at different positions on a large curved screen and checking whether the child sees them. The idea is to understand how far into the periphery the infant can see, by monitoring gaze, head orientation, saccades, and fixation times.
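To give a rough idea of the kind of check such an experiment relies on, here is a minimal sketch of deciding whether a participant "saw" a stimulus from gaze data. This is not Smart Eye's actual code; all names, thresholds, and the one-dimensional simplification are my own illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # horizontal gaze position on screen, in degrees of visual angle

def saw_stimulus(samples, onset_t, stim_x,
                 window=2.0, tolerance=5.0, min_fixation=0.2):
    """Hypothetical criterion: did gaze stay within `tolerance` degrees of the
    stimulus position for at least `min_fixation` seconds, inside a response
    window starting at stimulus onset?"""
    # Keep only samples inside the response window and close to the stimulus.
    hits = [s for s in samples
            if onset_t <= s.t <= onset_t + window
            and abs(s.x - stim_x) <= tolerance]
    if not hits:
        return False
    # Require the on-target samples to span at least the minimum fixation time.
    return max(s.t for s in hits) - min(s.t for s in hits) >= min_fixation
```

In a real analysis one would also look at the saccade that brings the eyes to the stimulus, and at head orientation, as described above; this sketch only captures the fixation-time part of the criterion.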
From Monday 25th March to Friday 5th April, my company sent me to Lancaster University to set up the system, train the researchers to use the software, and conduct a pilot test with several infants.
Installation & test of the Smart Eye system in Lancaster
During the pilot test, four 6-month-old babies were recorded. This helped us fix a lot of issues with the course of the experiment. However, we still find it difficult to hold the infant’s attention for a long period of time: the stimuli are not appealing enough, so the infants do not look at them. This is something we will need to improve in the future, probably by adding more attention getters and funny noises.
The agreement signed with the parents doesn’t allow us to share images of the infant recordings publicly, but I can say that we reach a level of head-tracking precision on infants similar to that of the adult version:
Detection of the face in Smart Eye Pro
However, it’s difficult to draw any firm conclusions about the gaze tracking so far, since, as stated above, the infants were not interested in the stimuli displayed on the screen. Once this issue is fixed, we should be able to analyse the gaze data in relation to the appearance of the stimuli.
Representation of the head orientation (green circle + coordinate system), the gaze (red ray) and the curved screen in the Smart Eye software. Intersection areas are defined on the screen, allowing us to see which area of the screen the infant is looking at (highlighted in red).
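The intersection areas shown above boil down to a classic geometric computation: cast the gaze ray against the screen surface and test which predefined area contains the hit point. The sketch below is my own illustration of that idea, not Smart Eye's implementation; for simplicity it uses a flat screen plane, whereas the actual screen is curved, and all names are hypothetical.

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Return the 3D point where a gaze ray hits the screen plane, or None if
    the ray is parallel to the plane or points away from it."""
    denom = _dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the screen
    t = _dot(plane_normal, [p - o for p, o in zip(plane_point, origin)]) / denom
    if t < 0:
        return None  # screen is behind the gaze origin
    return tuple(o + t * d for o, d in zip(origin, direction))

def area_hit(point_xy, areas):
    """Return the name of the first rectangular screen area containing the
    2D screen-plane point, or None if no area contains it."""
    x, y = point_xy
    for name, (xmin, xmax, ymin, ymax) in areas.items():
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return name
    return None
```

For example, with the screen as the plane z = 1 and two areas splitting it down the middle, a gaze ray from the origin with direction (0.2, 0.0, 1.0) hits the screen at (0.2, 0.0, 1.0) and lands in the right-hand area.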
Finally, I want to thank the researchers and technicians for their warm welcome and help during my stay at Lancaster University.