Broadly, my lab is focused on gaining a more complete understanding of the cognitive and cortical processes of attention and perception.
We employ a multidisciplinary research approach combining traditional research methods from cognitive psychology and psychophysics with sophisticated eye tracking methods, computational modeling, and transcranial magnetic stimulation (TMS).
The Laboratory is currently engaged in the following research streams:
Multisensory Integration
We are studying how the brain integrates information from multiple senses to produce a coherent and more reliable perception of the world. To this end, we seek to better understand such problems as: how the senses interact and alter each other's processing (e.g., in the ventriloquism effect); how information from multiple senses is integrated at different levels of the cortical processing hierarchy; and how the relative timing of different sensory inputs is coded to maintain the perception of synchrony between the senses.
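To give a concrete sense of the kind of computational question involved, one standard account (a general model in the literature, not necessarily the one used in our lab) holds that the brain combines redundant cues by weighting each according to its reliability. The short sketch below illustrates this reliability-weighted (maximum-likelihood) combination for a hypothetical audiovisual localization task; the stimulus values and noise levels are made up for illustration.

```python
import numpy as np

# Illustrative sketch: reliability-weighted (maximum-likelihood) cue combination,
# a standard model of multisensory integration. All numbers are hypothetical.

def combine_cues(mu_a, sigma_a, mu_v, sigma_v):
    """Combine an auditory and a visual location estimate.

    Each cue is weighted by its reliability (inverse variance), so the
    more precise cue dominates the integrated percept.
    """
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    w_v = 1 - w_a
    mu_combined = w_a * mu_a + w_v * mu_v
    sigma_combined = np.sqrt(1 / (1 / sigma_a**2 + 1 / sigma_v**2))
    return mu_combined, sigma_combined

# Example: vision is more precise than audition, so the combined estimate
# is pulled toward the visual location -- as in the ventriloquism effect.
mu, sigma = combine_cues(mu_a=10.0, sigma_a=4.0, mu_v=0.0, sigma_v=1.0)
print(f"combined location = {mu:.1f} deg, combined sd = {sigma:.2f} deg")
```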
Sample publication:
Prime, S.L. & Harris, L.R. (2010). Predicting the position of moving audiovisual stimuli. Experimental Brain Research, 203, 249-260
Crossmodal Attention
Our research also addresses how selective attention and information processing are affected when multiple sensory modalities are involved. Sometimes directing attention to one sensory modality comes at the expense of the others (e.g., talking on a cell phone while driving). Our research in this area is focused on better understanding the crossmodal links in attention and the extent to which auditory and visual attention can interact.
Sample publication:
Richard, C.M., Wright, R.D., Prime, S.L., Ee, C.M., Shimizu, Y., & Vavrik, J. (2002). Effect of a concurrent auditory task on visual search performance in a driving-related image-flicker task. Human Factors, 44, 108-119
Transsaccadic Perception
One of the central questions in visual neuroscience is how we perceive the visual world as a seamless and unified image despite rapid, repeated changes in gaze. Humans make about 3-5 rapid eye movements, called saccades, per second, which means that our gaze rests on one point of a visual scene for only a short time before moving to another. Since high-acuity vision is limited to the point of gaze, we effectively process only one small part of a visual scene at a time. It follows that, in order to perceive the entire visual scene as a unified image, the individual elements of the scene have to be pieced together by serially sampling them with saccadic eye movements. The perceptual experience of a continuous and unified visual world from disparate gazes separated by saccades is known as transsaccadic perception.

Our research is aimed at exploring how the visual system builds an internal representation of a visual object or scene across saccades. We also seek to better understand how transsaccadic perception might contribute to maintaining perceptual stability across saccades - i.e., the perception of a stable visual world in which the locations of objects are perceived as unchanged even though their positions on the retina change dramatically with every saccade.
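To make the stability problem concrete, the toy sketch below (a deliberately simplified, one-dimensional illustration, not a model from our lab) shows how an object's retinal position changes with every saccade even though its world-centered position, which can in principle be recovered by combining the retinal signal with an eye-position or efference-copy signal, remains constant.

```python
# Toy illustration of spatial updating across saccades.
# Coordinates are hypothetical degrees of visual angle in one dimension.

object_world_position = 12.0             # fixed location of an object in the scene
gaze_positions = [0.0, 5.0, -3.0, 8.0]   # fixation points visited by successive saccades

for gaze in gaze_positions:
    # Retinal position = object position relative to where the eye is pointing,
    # so it changes with every saccade.
    retinal_position = object_world_position - gaze
    # Combining the retinal signal with an eye-position signal recovers a
    # stable, world-centered estimate.
    recovered_world_position = retinal_position + gaze
    print(f"gaze at {gaze:+5.1f} deg -> retinal {retinal_position:+5.1f} deg, "
          f"recovered world position {recovered_world_position:+5.1f} deg")
```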
Sample publication:
Prime, S.L., Vesia, M., & Crawford, J.D. (2011). Cortical mechanisms for transsaccadic memory and integration of multiple object features. Philosophical Transactions of the Royal Society B: Biological Sciences, 366, 540-553
Interested in participating in an experiment?
Click here
Interested in joining the lab?
Click here