Realizing Multimodal Behavior: Closing the gap between behavior planning and embodied agent presentation

Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression, ...) is challenging. It requires a high degree of animation control, in particular when reactive behaviors are required. We propose to distinguish realization planning, where gesture and speech are processed symbolically using the Behavior Markup Language (BML), from presentation, which is controlled by a lower-level animation language (EMBRScript). Reactive behaviors can bypass planning and control presentation directly. In this paper, we show how to define a behavior lexicon, how this lexicon relates to BML, and how to resolve timing using formal constraint solvers. We conclude by demonstrating how to integrate reactive emotional behaviors.
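
To illustrate the timing resolution step, here is a minimal sketch (not the code used in the paper): it uses the Z3 SMT solver as an example of a formal constraint solver, and the sync-point names and durations are hypothetical, chosen only to show how a gesture stroke can be aligned with a word onset fixed by speech synthesis.

# Illustrative sketch only: resolving cross-modal timing with a constraint solver.
# Uses the Z3 SMT solver (pip install z3-solver); variable names and durations
# are hypothetical, not taken from the paper.
from z3 import Real, Solver, sat

s = Solver()

speech_word_onset = Real("speech_word_onset")  # onset of the stressed word, fixed by TTS
gesture_start = Real("gesture_start")
gesture_stroke = Real("gesture_stroke")
gesture_end = Real("gesture_end")

s.add(speech_word_onset == 0.8)               # timing delivered by speech synthesis
s.add(gesture_stroke == speech_word_onset)    # BML-style constraint: stroke on the word
s.add(gesture_stroke - gesture_start >= 0.4)  # minimum preparation phase
s.add(gesture_end - gesture_stroke >= 0.3)    # minimum retraction phase
s.add(gesture_start >= 0.0)

if s.check() == sat:
    m = s.model()
    print("start:", m[gesture_start], "stroke:", m[gesture_stroke], "end:", m[gesture_end])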


Video:

BML and emotion demo video (QuickTime, 7 MB): This video shows how the EMBOTS framework can parse BML and coordinate animation and speech synthesis, while letting reactive signals (e.g., emotion-based facial changes) bypass the main pipeline for immediate reactions.
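
A minimal architectural sketch of this bypass idea (class and method names are hypothetical, not the actual EMBOTS or EMBR API): planned behavior goes through BML realization, while reactive signals write to the presentation layer directly.

# Illustrative sketch only: hypothetical names, not the actual EMBOTS API.
class PresentationLayer:
    """Low-level animation control (EMBRScript-like commands)."""
    def enqueue(self, command: str) -> None:
        print("animating:", command)

class RealizationPlanner:
    """Symbolic BML processing: lexicon lookup and timing resolution."""
    def __init__(self, presentation: PresentationLayer) -> None:
        self.presentation = presentation

    def realize(self, bml_request: str) -> None:
        # A real planner would parse BML, look up lexicon entries and solve timing here.
        self.presentation.enqueue("planned behavior for: " + bml_request)

presentation = PresentationLayer()
planner = RealizationPlanner(presentation)

planner.realize("<gesture lexeme='beat'/> with accompanying speech")  # normal pipeline
presentation.enqueue("set facial target 'frown' 0.7")                 # reactive bypass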


Publication:

Kipp, M., Heloir, A., Gebhard, P., Schroeder, M. (2010) Realizing Multimodal Behavior: Closing the gap between behavior planning and embodied agent presentation. In: Proceedings of the 10th International Conference on Intelligent Virtual Agents (IVA-10), Springer.
