The aim of this talk series is to support early-career researchers and underrepresented groups by providing a platform for their work and increasing networking opportunities.

These talks are open to all and will be made publicly available where possible. Drop me a line if you are interested in joining.


PREVIOUS TALKS

21.10.2020 – Age-Related Differences in the Relationship Between Eye Movement Behaviour and Facial Recognition Memory | Negar Mazloum-Farzaghi (University of Toronto & Rotman Research Institute)

12.11.2020 – Evidence for atypical semantic visual salience in Super Recognizers | Marcel Linka (University of Giessen)

20.11.2020 – The intersection of perceptual decision-making, reading, consumer psychology and dyslexia – a multi-method approach | Léon Franzen (Concordia University)

26.11.2020 – Faces are made of this… Sensitivity to Orientation when processing faces | Valerie Goffaux (Université catholique de Louvain)

01.12.2020 – Is it Autism or Alexithymia? Explaining atypical socioemotional processing | Hélio Clemente Cuve (University of Oxford)


UPCOMING TALKS

10.12.2020 | 16h CET – Global visual salience of competing stimuli

Speaker: Alex Hernandez-Garcia (Université de Montréal)

Abstract: Current computational models of visual salience accurately predict the distribution of fixations on isolated visual stimuli. It is not known, however, whether the global salience of a stimulus, that is, its effectiveness in the competition for attention with other stimuli, is a function of its local salience or an independent measure. Further, do task and familiarity with the competing images influence eye movements? In this talk, I will present the analysis of a computational model of the global salience of natural images. We trained a machine learning algorithm to learn the direction of the first saccade of participants who freely observed pairs of images. The pairs balanced the combinations of new and already seen images, as well as task and task-free trials. The coefficients of the model provided a reliable measure of how likely each image is to attract the first fixation when seen next to another image, that is, its global salience. For example, images of close-up faces and images containing humans were consistently looked at first and were assigned higher global salience. Interestingly, we found that global salience cannot be explained by the feature-driven local salience of images, that the influence of task and familiarity was rather small, and that the previously reported left-sided bias was reproduced. This computational model of global salience allows us to analyse multiple other aspects of human visual perception of competing stimuli. In the talk, I will also present our latest results from analysing saccadic reaction time as a function of the global salience of the pair of images.
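For readers curious what such a paired-comparison model might look like in practice, here is a minimal illustrative sketch (not the speaker's code): a Bradley-Terry-style logistic regression in which each image receives one coefficient that can be read as its global salience, fitted to the direction of the first saccade. All variable names and the simulated data below are placeholders.

```python
# Illustrative sketch only (not from the talk): each image gets one weight,
# interpretable as its global salience, fitted to which side attracted the
# first saccade in image-pair trials. Data here are randomly simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_images, n_trials = 100, 5000
rng = np.random.default_rng(0)
left = rng.integers(0, n_images, n_trials)    # index of the image shown on the left
right = rng.integers(0, n_images, n_trials)   # index of the image shown on the right
first_saccade_right = rng.integers(0, 2, n_trials)  # 1 if first saccade went right

# Design matrix: +1 for the right image and -1 for the left image of each pair,
# so the fitted weight of each image acts as its global-salience score.
X = np.zeros((n_trials, n_images))
X[np.arange(n_trials), right] += 1.0
X[np.arange(n_trials), left] -= 1.0

model = LogisticRegression(C=1.0)
model.fit(X, first_saccade_right)
global_salience = model.coef_.ravel()   # one score per image
side_bias = model.intercept_[0]         # captures any overall side bias
```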

07.01.2021 | 17h CET – Reproducible EEG from raw data to publication figures

Speaker: Cyril Pernet (University of Edinburgh)

Abstract: In this talk I will present recent developments in data sharing, organization, and analysis that make it possible to build fully reproducible workflows. First, I will present the Brain Imaging Data Structure (BIDS) and discuss how it allows workflows to be built, showing some new tools to read, import, and create studies from EEG data structured that way. Second, I will present several newly developed tools for reproducible pre-processing and statistical analyses. Although it does take some extra effort, I will argue that it is largely feasible to make most EEG data analysis fully reproducible.
Disclosure: I am co-lead of EEG-BIDS and of the Electrophysiology-BIDS derivatives, lead developer of the LIMO-MEG toolbox, and a collaborator on EEGLAB; as such, my views are totally biased.
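For readers unfamiliar with BIDS-formatted EEG, here is a minimal illustrative sketch (not from the talk) of loading one subject's recording with the MNE-BIDS Python package; the dataset root, subject, and task labels are placeholders for whatever your dataset actually contains.

```python
# Illustrative sketch: reading one EEG recording from a BIDS-structured dataset
# with MNE-BIDS. Channel types, events and metadata are populated from the
# BIDS sidecar files rather than hand-coded scripts, which aids reproducibility.
from mne_bids import BIDSPath, read_raw_bids

bids_root = "/data/my_eeg_study"   # top-level BIDS directory (placeholder)
bids_path = BIDSPath(subject="01", task="rest", datatype="eeg", root=bids_root)

raw = read_raw_bids(bids_path=bids_path)
print(raw.info)
```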

14.01.2021 | 16h CET – What is serially-dependent perception good for?

Speaker: Mauro Manassi (University of Aberdeen)

Abstract: Perception can be strongly serially dependent (i.e., biased toward previously seen stimuli). Recently, serial dependencies in perception were proposed as a mechanism for perceptual stability, increasing the apparent continuity of the complex environments we experience in everyday life. For example, stable scene perception can be actively achieved by the visual system through global serial dependencies, a special kind of serial dependence between summary statistical representations. Serial dependence also occurs between emotional expressions, but it is highly selective for the same identity. Overall, these results further support the notion of serial dependence as a global, highly specialized, and purposeful mechanism. However, serial dependence could also be a deleterious phenomenon in unnatural or unpredictable situations, such as visual search in radiological scans, biasing current judgments toward previous ones even when accurate and unbiased perception is needed. For example, observers make consistent perceptual errors when classifying a tumor-like shape on the current trial, seeing it as more similar to the shape presented on the previous trial. In a separate localization test, observers make consistent errors when reporting the perceived position of an object on the current trial, mislocalizing it toward its position in the preceding trial. Taken together, these results show two opposite sides of serial dependence: it can be a beneficial mechanism which promotes perceptual stability, but at the same time a deleterious mechanism which impairs our percept when fine recognition is needed.

21.01.2021 | 16h CET – Intra- and interindividual differences in person perception

Speaker: Maximilian Broda (University of Giessen)

Abstract: Vision creates an individually unique window to our world. Past research has shown that individuals show highly consistent differences in their gaze behavior towards certain semantic features of a scene, but it is unclear whether further such features of individual divergence exist. Here, I will present ongoing work zooming in on person perception. A first line of work expanded the annotations of an existing stimulus set with pixel masks for body and face parts. Preliminary results of eye-tracking experiments using these stimuli suggest consistent individual differences, especially in the tendency to fixate mouth and eye regions. A second line of work focuses on intraindividual differences in face processing across time and across the visual field. Previous work has shown that the recognition of inner face features is best at their expected/usual location in the visual field. To date, it is still unclear whether this enhanced recognition performance is a learned adaptation to input statistics or follows an innate, face-specific template. The ongoing global pandemic, with a surge of people wearing face masks, gives us an exceptional chance to investigate the processing of artificial face features. I will present preliminary fMRI results suggesting that, over time, the ventral processing of face masks becomes increasingly similar to that of faces. This result is currently being followed up by behavioral experiments testing whether the feature-location contingency observed for natural facial features can be found for artificial features as well. Taken together, these results suggest that individual gaze behavior and perception are the result of an interplay between our unique visual brain and environment.

28.01.2021 | 16h CET – TBA

Speaker: Basil Preisig (University of Zürich)