Senior Data Manager at Clinilabs (2013–Present)

Research Scientist at Global Alliance for Neuroscience, Inc. (2012–Present)

Research Engineer at NYU Langone Medical Center (2011–2012)

Research Associate at International Brain Research Foundation (2010–2011)

PhD, BME Department, City College of New York (2004–2010)

Education

2004 – 2010: PhD, BME Department, City College of New York, New York, USA

2001 – 2004: MS, EE Department, Peking University, Beijing, China

1997 – 2001: BS, EE Department, Peking University, Beijing, China

Research Areas 

Psychophysics and modeling of audio-visual speech in noise. Tinnitus modeling and psychophysics.

Psychophysics and modeling of audio-visual speech in noise

Watching a speaker’s facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
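
To make the integration step concrete, here is a minimal numerical sketch of Bayes-optimal cue combination in the spirit of the model described above. It is not the paper's implementation: the lexicon, the feature dimensionality, and the noise levels are all illustrative assumptions.

import numpy as np

# Minimal sketch (illustrative parameters): words are points in a
# D-dimensional feature space, and each modality delivers a noisy
# Gaussian observation of the spoken word.
rng = np.random.default_rng(0)

D = 20                                  # feature-space dimensionality (assumption)
n_words = 50                            # lexicon size (assumption)
words = rng.normal(size=(n_words, D))   # synthetic word prototypes

sigma_a = 2.0                           # auditory noise level (varies with SNR)
sigma_v = 1.5                           # visual noise level

true_idx = 0
obs_a = words[true_idx] + rng.normal(scale=sigma_a, size=D)  # auditory observation
obs_v = words[true_idx] + rng.normal(scale=sigma_v, size=D)  # visual observation

def log_likelihood(obs, sigma):
    # Gaussian log-likelihood of each word prototype given one observation
    return -np.sum((words - obs) ** 2, axis=1) / (2 * sigma ** 2)

# Optimal cue integration with a flat prior over words: multiply the
# auditory and visual likelihoods, i.e. add their log-likelihoods.
log_post = log_likelihood(obs_a, sigma_a) + log_likelihood(obs_v, sigma_v)
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("reported word:", int(np.argmax(post)), " p(correct):", float(post[true_idx]))

With a flat prior, adding log-likelihoods is the optimal combination rule; the relative noise levels sigma_a and sigma_v determine how strongly each modality weighs on the recognized word.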


Tinnitus modeling and psychophysics

We hypothesize that tinnitus results from a gain-adaptation mechanism that, when confronted with degraded peripheral input, increases neuronal gains such that spontaneous neuronal activity is perceived as a phantom sound. The aim of this study is to find a correlation between the tinnitus percept and measures of peripheral processing on an individual-subject basis. We attempt to predict the tinnitus likeness spectrum from both distortion product otoacoustic emissions (DPOAE) and audiograms measured with high frequency resolution.
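
As a rough illustration of this prediction problem, the sketch below fits a per-subject linear model of tinnitus likeness from audiogram and DPOAE values across frequency. Synthetic data stand in for real measurements; the model form, frequency grid, and all numbers are assumptions for demonstration, not the study's actual method.

import numpy as np

# Hypothetical per-subject sketch: predict the tinnitus likeness spectrum
# from threshold elevation (audiogram) and DPOAE level across frequency.
rng = np.random.default_rng(1)
freqs = np.arange(1000, 16001, 500).astype(float)   # test frequencies in Hz (assumption)

# Synthetic stand-ins for the measured quantities (all values illustrative):
audiogram = np.clip(0.004 * (freqs - 4000.0), 0.0, 60.0) + rng.normal(0.0, 3.0, freqs.size)
dpoae = -0.3 * audiogram + rng.normal(0.0, 2.0, freqs.size)
likeness = 0.8 * audiogram - 0.5 * dpoae + rng.normal(0.0, 5.0, freqs.size)

# Least-squares fit of likeness from the two peripheral measures plus an intercept.
X = np.column_stack([audiogram, dpoae, np.ones(freqs.size)])
coef, *_ = np.linalg.lstsq(X, likeness, rcond=None)
predicted = X @ coef

# Correlation between predicted and reported likeness across frequencies.
r = np.corrcoef(predicted, likeness)[0, 1]
print(f"r between predicted and reported likeness: {r:.2f}")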


Journal Publications

  • Wei Ji Ma, Xiang Zhou, Lars A. Ross, John J. Foxe, Lucas C. Parra, “Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space,” PLoS ONE, 4(3): e4639, March 4, 2009. pdf


Conference presentations

  • Xiang Zhou, Simon Henin, Glenis Long, Lucas C. Parra, “Spectral profile of tinnitus can be predicted from high-resolution audiogram and DPOAE for a subset of subjects,” 3rd Tinnitus Research Initiative Meeting, Stresa, Italy, June 2009.
  • Xiang Zhou, Suzanne Thompson, Glenis Long, Lucas C. Parra, “Perception thresholds of pure tone in notched noise correlate with generator component of distortion product otoacoustic emissions,” 3rd Tinnitus Research Initiative Meeting, Stresa, Italy, June 2009.
  • Wei Ji Ma, Xiang Zhou, Lucas C. Parra, “Auditory-Visual Speech Recognition is Consistent with Bayes-Optimal Cue Combination,” Computational and Systems Neuroscience 2008, Salt Lake City, February 2008 (poster). pdf
  • Xiang Zhou, Lars Ross, Tue Lehn-Schioler, John J. Foxe, Lucas C. Parra, “Temporal visual cues aid speech recognition,” 7th Annual Meeting of the International Multisensory Research Forum, Dublin, Ireland, June 18–21, 2006 (poster). pdf
