Changes in intonation patterns can convey not only different meanings but also different emotions, even when the sequence of speech segments in a sentence remains the same. These patterns vary with the structure and emotion of the sentence and must therefore be stored in a speech database. Storing all utterances in every expressive style is a difficult and time-consuming task that also consumes a large amount of memory, so an approach is needed that minimizes the time and memory required for an emotion-rich database. A number of studies in this area have been carried out for several languages and models have been developed; for Hindi, however, few such studies exist. Taking this into consideration, this paper studies intonation patterns across different languages and analyses them for the Hindi language. On the basis of this detailed study of intonation patterns, an algorithm is proposed for emotion conversion. The algorithm requires only neutral utterances to be stored in the database; utterances in other expressive styles can be derived from these neutral ones. The proposed algorithm is based on the linear modification model (LMM), in which the fundamental frequency (F0) is one of the factors used to convert emotions. To perform the experiments, an intonationally rich database was maintained for four expressive styles: surprise, happiness, anger and sadness. Perception tests were also carried out, in which a group of listeners was asked to listen to utterances from the database and judge the emotion. These tests involved both classifying the emotions already present in the database and judging the quality of the converted neutral utterances. The results are analysed for the four emotions, and the performance of the experiment is evaluated. The accuracy of the perception test on transformed emotions was found to be 95% for surprise, 93.4% for sadness, 82% for happiness and 96.7% for anger.
Key words: Intonation patterns, intonational database, emotion conversion, fundamental frequency (F0), perception test.
Copyright © 2022 Author(s) retain the copyright of this article.
This article is published under the terms of the Creative Commons Attribution License 4.0
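The LMM-based conversion described in the abstract modifies the fundamental frequency (F0) contour of a neutral utterance to approximate a target expressive style. A minimal sketch of that idea is given below; the per-emotion scale and shift factors are purely illustrative assumptions, not the parameters derived in this paper:

```python
# Illustrative sketch of linear-modification-model (LMM) emotion
# conversion applied to an F0 contour. The (scale, shift) pairs below
# are hypothetical placeholders, NOT the paper's measured values.
LMM_FACTORS = {
    "surprise":  (1.30,  20.0),
    "happiness": (1.20,  10.0),
    "anger":     (1.15,  15.0),
    "sadness":   (0.85, -10.0),
}

def convert_f0_contour(f0_contour, emotion):
    """Apply a linear modification (scale * F0 + shift) to a neutral
    F0 contour, frame by frame. Unvoiced frames (F0 == 0) are kept
    unchanged."""
    scale, shift = LMM_FACTORS[emotion]
    return [scale * f0 + shift if f0 > 0 else 0.0 for f0 in f0_contour]

# Example: a short neutral contour in Hz (0.0 marks unvoiced frames)
neutral = [120.0, 0.0, 130.0, 140.0]
print(convert_f0_contour(neutral, "sadness"))
```

In practice the modified contour would be imposed back onto the neutral utterance with a pitch-modification technique (e.g. PSOLA-style resynthesis) before the perception test; only the linear F0 mapping itself is shown here.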