Research

My research interest is in understanding how humans perceive visual objects and the spatial environment. The act of seeing is deceptively complex, and discovering its underlying mechanisms demands a multidisciplinary approach that engages a wide variety of issues in visual perception and cognition. Accordingly, while psychophysical methods and phenomenological observations predominate in my research, considerations from neuroscience and computational approaches inform the research design, data analysis, and interpretation throughout. Currently, my laboratory focuses on two major areas of research, which are summarized below.

Space Perception and Cognition:

The vivid 3-dimensional perception of the world around us begins with the processing of 2-dimensional retinal images. How is this feat achieved? In particular, how does our brain process the 2-dimensional retinal information and endow it with the remarkable sensation of 3-dimensional depth? What assumptions or internal laws are implemented at the various neural processing stages to accomplish this? And where do these assumptions come from? On the last question, many researchers believe the assumptions reflect the regularities of our ecological niche: by exploiting these regularities, the brain can reduce coding redundancy and enhance efficiency. Our research objectives are to reveal what external environmental information is extracted for space perception, and to identify the assumptions and computational steps the brain uses to derive 3-dimensional perceptual space from that information.

We currently focus on space vision in the intermediate distance range. Our psychophysical research employs various methods (perceptual reports, visually directed and guided actions, eye-movement and locomotion recordings, etc.) to measure human subjects' performance in both real-space and virtual reality (VR) environments. Capitalizing on relatively recent VR technology not only gives us a more convenient means of manipulating visual scenes, but also allows us to create novel visual spaces and learn how the human visual system adapts to new spatial environments. In this way, we can also discover how the visual system "recognizes" new environmental regularities and implements them as rules.
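As a concrete example of the kind of environmental regularity and internal assumption at issue, consider binocular disparity in its simplest textbook form: a point at distance Z subtends a vergence difference of roughly b / Z between the two eyes, where b is the interocular separation. The short Python sketch below is illustrative only (the function name and the 0.063 m baseline are assumed values for illustration, not laboratory parameters); it simply inverts this small-angle relation to recover distance from disparity.

```python
import math

ARCMIN = math.pi / (180 * 60)  # radians in one arcminute

def depth_from_disparity(disparity_rad, baseline_m=0.063):
    """Estimate egocentric distance (m) from absolute angular disparity.

    Small-angle geometry: disparity ~ baseline / Z, so
    Z ~ baseline / disparity. The 0.063 m default is a typical adult
    interocular separation (an assumed value for illustration).
    """
    return baseline_m / disparity_rad

# A target 10 m away yields an absolute disparity of 0.063 / 10 rad
# (about 21.7 arcmin); inverting the relation recovers the distance.
print(depth_from_disparity(0.0063))          # -> 10.0
print(depth_from_disparity(21.66 * ARCMIN))  # -> ~10.0
```

Note how quickly this cue degrades with distance: at 20 m the disparity halves, which is one reason the intermediate distance range is such an informative regime for asking which other regularities the visual system recruits.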

Mechanisms of Middle-Level Vision and Perceptual Learning:

The confluence of achievements by the neuroscience, psychophysics, and computational research communities has greatly advanced our understanding of how visual information is coded at the early stage of the visual pathway. Taking advantage of this knowledge, and building on it, our recent research focuses on how visual information is processed at a later stage of the visual pathway: the surface representation level, which serves as the critical link between the early cortical filtering level and the late object recognition level. Our understanding of the surface representation level is in its infancy, and many questions remain to be answered. Among the questions my laboratory is exploring are: How are the outputs of the early cortical filtering level integrated to form surface representations, and what rules govern the integration process? How much does the surface representation level contribute to our immediate perception of the visual world? How does visual information at the middle level affect object representation? What roles do visual attention and memory play in the processing of visual information between the surface representation level and the object recognition level? Answering these questions is an important step toward understanding the workings of middle-level vision.
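For concreteness, the short Python sketch below illustrates what the "outputs of the early cortical filtering level" are standardly taken to be: the responses of a bank of oriented Gabor filters, the textbook description of V1 simple-cell receptive fields. The function names and parameter values here are illustrative assumptions, not a model used in the laboratory; the open questions above concern precisely how responses of this kind are integrated into surface representations.

```python
# A minimal sketch, assuming numpy and scipy, of the standard Gabor
# description of early cortical (V1 simple-cell) filtering.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=4.0):
    """Return a size x size Gabor kernel tuned to orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate into the filter's frame
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))  # Gaussian window
    carrier = np.cos(2 * np.pi * x_t / wavelength)           # oriented grating
    return envelope * carrier

def filter_bank_responses(image, n_orientations=4):
    """Convolve an image with Gabors at evenly spaced orientations.

    The orientation-indexed response maps returned here stand in for
    the early-filter outputs that the surface representation level
    must integrate; the integration rules are the open question.
    """
    thetas = [k * np.pi / n_orientations for k in range(n_orientations)]
    return {theta: convolve2d(image, gabor_kernel(theta=theta), mode="same")
            for theta in thetas}
```

Such localized, orientation-tuned responses say nothing by themselves about which image regions belong to a common surface; that binding problem is what makes the surface representation level a distinct and largely uncharted processing stage.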