A psychoacoustic-masking model to predict the perception of speech-like stimuli in noise
 

Summary:
James J. Hant, Abeer Alwan
Speech Processing and Auditory Perception Laboratory, Department of Electrical Engineering, School of Engineering and
Applied Sciences, UCLA, 405 Hilgard Avenue, Los Angeles, CA 90095, USA
Received 8 August 2001; received in revised form 8 February 2002; accepted 23 April 2002
Abstract
In this paper, a time/frequency, multi-look masking model is proposed to predict the detection and discrimination of
speech-like stimuli in a variety of noise environments. In the first stage of the model, sound is processed through an
auditory front end which includes bandpass filtering, squaring, time windowing, logarithmic compression and additive
internal noise. The result is an internal representation of time/frequency "looks" for each sound stimulus. To detect or
discriminate a signal in noise, the listener combines information across looks using a weighted d′ detection device.
Parameters of the model are fit to previously measured masked thresholds of bandpass noises which vary in bandwidth,
duration, and center frequency (JASA 101 (1997) 2789). The resulting model is successful in predicting masked
thresholds of spectrally shaped noise bursts, glides, and formant transitions of varying durations. The model is also
successful in predicting the discrimination of synthetic plosive CV syllables in a variety of noise environments and vowel
contexts.
© 2002 Elsevier Science B.V. All rights reserved.
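The processing chain described in the abstract (bandpass filtering, squaring, time windowing, logarithmic compression, additive internal noise, and a weighted d′ combination across looks) can be sketched in code. The Python sketch below is illustrative only and is not the authors' implementation: the filter bank, window length, internal-noise level, and the root-sum-of-squares weighting of per-look d′ values are placeholder assumptions; the paper's fitted parameters and exact decision rule may differ.

```python
"""Illustrative multi-look masking-model sketch (assumptions, not the paper's code)."""
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000                 # sampling rate in Hz (assumed)
WIN = 0.004                # 4-ms analysis window (assumed)
INTERNAL_NOISE_DB = 1.0    # std. dev. of additive internal noise in dB (assumed)


def auditory_front_end(x, fs=FS, bands=((300, 700), (700, 1500), (1500, 3000))):
    """Return a (num_bands, num_looks) matrix of internal time/frequency 'looks'.

    Stages: bandpass filtering -> squaring -> short-time windowing
    -> logarithmic compression -> additive internal noise.
    """
    hop = int(WIN * fs)
    looks = []
    for lo, hi in bands:
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        y = lfilter(b, a, x) ** 2                       # squaring (energy)
        n_looks = len(y) // hop
        frames = y[: n_looks * hop].reshape(n_looks, hop)
        energy = frames.mean(axis=1)                    # time windowing
        level = 10.0 * np.log10(energy + 1e-12)         # log compression
        level += np.random.normal(0.0, INTERNAL_NOISE_DB, size=level.shape)
        looks.append(level)
    return np.array(looks)


def combined_d_prime(signal_plus_noise_reps, noise_alone_reps, weights=None):
    """Combine per-look d' values into a single decision statistic.

    Inputs are lists of (num_bands, num_looks) arrays from repeated
    presentations. A root-sum-of-squares combination of weighted
    per-look d' values is used here as a common multiple-looks rule.
    """
    sn = np.stack(signal_plus_noise_reps)   # (reps, bands, looks)
    n = np.stack(noise_alone_reps)
    mu_diff = sn.mean(axis=0) - n.mean(axis=0)
    sigma = np.sqrt(0.5 * (sn.var(axis=0) + n.var(axis=0))) + 1e-12
    d_prime = mu_diff / sigma                # per-look d'
    if weights is None:
        weights = np.ones_like(d_prime)
    return np.sqrt(np.sum((weights * d_prime) ** 2))


if __name__ == "__main__":
    # Toy detection example: a 1-kHz tone burst in Gaussian noise maskers.
    rng = np.random.default_rng(0)
    t = np.arange(0, 0.1, 1.0 / FS)
    maskers = [rng.normal(0, 1, t.size) for _ in range(20)]
    target = 0.2 * np.sin(2 * np.pi * 1000 * t)
    sn_reps = [auditory_front_end(m + target) for m in maskers]
    n_reps = [auditory_front_end(m) for m in maskers]
    print("combined d':", combined_d_prime(sn_reps, n_reps))
```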

  

Source: Alwan, Abeer - Electrical Engineering Department, University of California at Los Angeles

 

Collections: Computer Technologies and Information Sciences