Abstract:
This paper proposes a new approach for generating realistic three-dimensional speech animation. The basic idea is to synthesize animated faces from prosodic information that the user edits with a text markup language. By capturing characteristic trajectories of utterances from video clips, our technique builds a parametric model based on an exponential formula, which extends the static viseme to a dynamic one. To relate the prosody-annotated text to the 3D animation, each input attribute is mapped to the value of a formula parameter. Experimental results show that the proposed technique synthesizes animations with different effects depending on the prosodic information available.
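The abstract does not give the exponential formula itself, but the idea of blending static viseme targets into a dynamic trajectory can be sketched with an exponential weighting function in the spirit of dominance-based coarticulation models. All names and parameters below (`center`, `magnitude`, `rate`) are assumptions for illustration, standing in for the formula parameters that the paper maps prosodic attributes onto.

```python
import math

def dominance(t, center, magnitude, rate):
    """Hypothetical exponential weight of one viseme at time t.

    center: time at which the viseme peaks
    magnitude, rate: stand-ins for the formula parameters that
    prosodic attributes could be mapped onto (assumed names).
    """
    return magnitude * math.exp(-rate * abs(t - center))

def blend(t, visemes):
    """Blend static viseme targets into a dynamic trajectory value.

    visemes: list of (target, center, magnitude, rate) tuples,
    where target is the static viseme's parameter value.
    """
    weights = [dominance(t, c, m, r) for (_, c, m, r) in visemes]
    total = sum(weights)
    if total == 0:
        return 0.0
    # Normalized weighted sum: nearby visemes dominate, distant
    # ones decay exponentially, yielding smooth coarticulation.
    return sum(w * target for w, (target, _, _, _) in zip(weights, visemes)) / total
```

Raising a viseme's `rate`, for example, makes its influence decay faster, which is one plausible way a prosodic emphasis attribute could sharpen an articulation; this is a sketch under those assumptions, not the paper's actual formulation.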
Source:
Journal of Beijing University of Technology
ISSN: 0254-0037
Year: 2009
Issue: 12
Volume: 35
Page: 1690-1696
WoS CC Cited Count: 0
ESI Highly Cited Papers on the List: 0