Summary: Preprint of: I.R. Murray, J.L. Arnott / Computer Speech and Language 22 (2008) 107-129
Applying an analysis of acted vocal emotions to improve the simulation of synthetic speech. Iain R.
Murray & John L. Arnott, Computer Speech and Language, Vol.22, No.2, 2008, pp.107-129. DOI:
10.1016/j.csl.2007.06.001
Applying an analysis of acted vocal emotions to improve
the simulation of synthetic speech
IAIN R. MURRAY * and JOHN L. ARNOTT
School of Computing, University of Dundee, Dundee DD1 4HN, U.K.
This is a pre-print report of research published in:
Computer Speech and Language, Vol.22, No.2, 2008, pp.107-129.
Computer Speech and Language is available online at:
http://www.elsevier.com/wps/find/journaldescription.cws_home/622808/description
ISSN: 0885-2308
URL: http://dx.doi.org/10.1016/j.csl.2007.06.001
DOI: 10.1016/j.csl.2007.06.001
Abstract: All speech produced by humans conveys information about the speaker, including the
speaker's emotional state. It is thus desirable to include vocal affect in any synthetic speech
where the naturalness of the speech produced is important. However, the speech factors which
convey affect are poorly understood, and their implementation in synthetic speech systems is not yet
commonplace. A prototype system for the production of emotional synthetic speech using a commercial
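The abstract describes driving a synthesizer with speech factors (prosodic parameters) associated with particular emotions. As a minimal illustrative sketch of that general idea, the snippet below applies per-emotion offsets to baseline prosody settings. All names, parameter values, and the offset table here are hypothetical examples, not the authors' actual system or measured values:

```python
# Hypothetical sketch: adjust baseline prosody parameters by per-emotion
# offsets before passing them to a speech synthesizer. The parameter names
# and numbers are illustrative assumptions, not values from the paper.

BASELINE = {"pitch_hz": 120.0, "rate_wpm": 170.0, "volume_db": 0.0}

# Example offset table; real systems derive such values from analyses
# of acted or natural emotional speech.
EMOTION_OFFSETS = {
    "anger":   {"pitch_hz": +10.0, "rate_wpm": +30.0, "volume_db": +6.0},
    "sadness": {"pitch_hz": -15.0, "rate_wpm": -40.0, "volume_db": -3.0},
    "joy":     {"pitch_hz": +20.0, "rate_wpm": +20.0, "volume_db": +3.0},
}

def apply_emotion(baseline, emotion):
    """Return a copy of the prosody settings with the emotion's offsets added.

    Unknown emotions fall back to the neutral baseline unchanged.
    """
    offsets = EMOTION_OFFSETS.get(emotion, {})
    return {k: v + offsets.get(k, 0.0) for k, v in baseline.items()}

print(apply_emotion(BASELINE, "sadness"))
# → {'pitch_hz': 105.0, 'rate_wpm': 130.0, 'volume_db': -3.0}
```

A rule table like this is the simplest way to add affect to a parametric synthesizer; richer approaches vary the offsets over the course of an utterance rather than applying a single global shift.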

  

Source: Arnott, John - School of Computing, University of Dundee

 

Collections: Computer Technologies and Information Sciences; Biology and Medicine