Summary: Preprint of: I.R. Murray & J.L. Arnott, "Applying an analysis of acted vocal emotions to improve the simulation of synthetic speech", Computer Speech and Language, Vol.22, No.2, 2008, pp.107-129. DOI: 10.1016/j.csl.2007.06.001
Applying an analysis of acted vocal emotions to improve
the simulation of synthetic speech
IAIN R. MURRAY * and JOHN L. ARNOTT
School of Computing, University of Dundee, Dundee DD1 4HN, U.K.
This is a pre-print report of research published in:
Computer Speech and Language, Vol.22, No.2, 2008, pp.107-129.
Computer Speech and Language is available online; the DOI bookmark of the article is: http://dx.doi.org/10.1016/j.csl.2007.06.001
Abstract: All speech produced by humans carries information about the speaker, including the speaker's emotional state. It is thus desirable to include vocal affect in any synthetic speech where improving the naturalness of the speech produced is important. However, the speech factors which convey affect are poorly understood, and their implementation in synthetic speech systems is not yet commonplace. A prototype system for the production of emotional synthetic speech using a commercial