
2011


Feasibility Study on Assessing the Potential Uses of Sign Language Avatars

An avatar is an artificial figure in a virtual world. Avatars could be used to automatically translate dynamic text on websites into sign language. This could be a promising long-term solution for making web content accessible to deaf people. So far, however, the intelligibility of avatars is only around 60%. With improved intelligibility, the range of possible applications for sign language avatars could be extended considerably.

Further possible application areas for avatars include:

  • Assistance in everyday situations (such as visiting the dentist)
  • Job search
  • Help with finding housing

In our feasibility study, we take a critical inventory of the state of the art and summarize possible technical developments. This should allow the opportunities and limits of using sign language avatars to be assessed more accurately. We also want to answer the question of whether our project "Avatarforschung" can achieve substantial progress in the use of sign language avatars.





INTAKT - Interaktive Avatar Kommunikations-Technologie

INTAKT is a project funded by the German Federal Ministry of Education and Research to foster cooperation between small and medium-sized enterprises and research institutions. In INTAKT we explore the tight coupling of character animation technology and intelligent authoring tools. We cooperate with the character animation company Charamel GmbH and are working on extensions of our SceneMaker tool. A first milestone application was an instrumented supermarket (in cooperation with the Innovative Retail Laboratory, St. Wendel) in which multiple virtual shop assistants give advice and talk both to the user and to the user's personal (mobile) agent.



2010

Multitouch Puppetry: Creating coordinated 3D motion for an articulated arm

Controlling a high-dimensional structure like a 3D humanoid skeleton is a challenging task. Intuitive interfaces that allow non-experts to perform character animation with standard input devices would open up many possibilities. Therefore, we propose a novel multitouch interface for simultaneously controlling the many degrees of freedom of a human arm. We combine standard multitouch techniques and a morph map into a bimanual interface, and evaluate this interface in a three-layered user study with repeated interactions.
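A minimal sketch of the mapping idea in Java (all class names, joint choices, and angle ranges are invented for illustration, not the study's actual implementation): one hand's touch point could drive the shoulder's swing angles directly, while the other hand's point is blended through a morph map into elbow and wrist angles.

```java
/** Minimal sketch of a bimanual touch-to-arm mapping (hypothetical names). */
public class ArmPuppet {

    /** Joint angles of a simplified arm, in degrees. */
    static class ArmPose {
        double shoulderYaw, shoulderPitch, elbowFlex, wristTwist;
        @Override public String toString() {
            return String.format("shoulder(%.1f, %.1f) elbow(%.1f) wrist(%.1f)",
                    shoulderYaw, shoulderPitch, elbowFlex, wristTwist);
        }
    }

    /**
     * Left-hand touch (lx, ly) controls the shoulder directly;
     * right-hand touch (rx, ry) goes through a "morph map" that blends
     * between two extreme elbow/wrist poses. All touch coordinates are
     * normalized to [0, 1].
     */
    static ArmPose map(double lx, double ly, double rx, double ry) {
        ArmPose p = new ArmPose();
        p.shoulderYaw   = -90 + 180 * lx;   // left-right swing
        p.shoulderPitch = -45 + 135 * ly;   // up-down swing
        p.elbowFlex     =   0 + 140 * rx;   // blend: extended -> bent
        p.wristTwist    = -80 + 160 * ry;   // blend: pronated -> supinated
        return p;
    }

    public static void main(String[] args) {
        System.out.println(map(0.5, 0.5, 0.25, 0.75));
    }
}
```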




ITeach: Evaluating virtual character benefits on a learning task with repeated interactions

Embodied agents have the potential to become a highly natural form of human-computer interaction. However, it remains an open question whether adding an agent to an application has a measurable impact, positive or negative, on motivation and learning performance. Prior studies are very diverse with respect to design, statistical power, and outcome, and repeated interactions are rarely considered. We present a controlled user study of a vocabulary trainer application that evaluates the effect on motivation and learning performance. Subjects interacted with either a no-agent or a with-agent version in a between-subjects design over repeated sessions. In contrast to prior work (e.g. the Persona Effect), we found neither positive nor negative effects on motivation and learning performance, i.e. a Persona Zero-Effect.



Realizing Multimodal Behavior: Closing the gap between behavior planning and embodied agent presentation

Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression, etc.) requires a high degree of animation control, in particular when reactive behaviors are required. We suggest distinguishing between realization planning, where gesture and speech are processed symbolically using the Behavior Markup Language (BML), and presentation, which is controlled by a lower-level animation language (EMBRScript). Reactive behaviors can bypass planning and directly control presentation.
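The layering could be sketched roughly as follows (hypothetical Java interfaces; the BML string is illustrative, and the low-level string commands merely stand in for EMBRScript, whose concrete syntax is not shown here):

```java
/** Sketch of the planning/presentation split (hypothetical interfaces). */
public class BehaviorRealizer {

    /** Presentation layer: plays low-level animation commands. */
    interface AnimationEngine {
        void play(String animationCommand);
    }

    private final AnimationEngine engine;

    BehaviorRealizer(AnimationEngine engine) { this.engine = engine; }

    /**
     * Realization planning: turn a symbolic behavior description
     * (e.g. a BML block pairing speech and gesture) into timed
     * low-level commands. Here just a placeholder expansion.
     */
    void realize(String bmlBlock) {
        engine.play("speech: " + bmlBlock);
        engine.play("gesture: beat, synchronized with the speech stroke");
    }

    /** Reactive behaviors skip planning and hit the engine directly. */
    void react(String command) {
        engine.play(command);
    }

    public static void main(String[] args) {
        BehaviorRealizer r = new BehaviorRealizer(System.out::println);
        r.realize("<bml><speech id=\"s1\">Hello!</speech></bml>");
        r.react("gaze: track user");   // e.g. triggered by head tracking
    }
}
```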



2009




EMBR: A Realtime Animation Engine for Interactive Embodied Agents

Embodied agents are a powerful paradigm for current and future multimodal interfaces, yet require high effort and expertise for their creation, assembly and animation control. Therefore, open animation engines and high-level control languages are required to make embodied agents accessible to researchers and developers. We present EMBR, a new realtime character animation engine that offers a high degree of animation control via the EMBRScript language.

EMBR Homepage




Annotation of Human Gesture using 3D Skeleton Controls

The manual transcription of human gesture behavior from video for linguistic analysis is a work-intensive process that results in a rather coarse description of the original motion. We present a novel approach for transcribing gestural movements: by overlaying an articulated 3D skeleton onto the video frames, the human coder can replicate the original motion on a pose-by-pose basis by manipulating the skeleton. Our approach is integrated into the ANVIL tool, so that both symbolic interval data and 3D pose data can be entered in a single tool.
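A pose-by-pose transcription could be stored along these lines (a data-model sketch with assumed field names, not ANVIL's actual file format):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of a pose-by-pose gesture transcription (assumed structure). */
public class PoseTrack {

    /** One joint rotation, e.g. as Euler angles in degrees. */
    record JointRotation(String joint, double x, double y, double z) {}

    /** The skeleton configuration the coder set for one video frame. */
    record KeyPose(int videoFrame, List<JointRotation> rotations) {}

    private final List<KeyPose> keyPoses = new ArrayList<>();

    void add(KeyPose pose) { keyPoses.add(pose); }

    public static void main(String[] args) {
        PoseTrack track = new PoseTrack();
        track.add(new KeyPose(120, List.of(
                new JointRotation("r_shoulder", 10, -35, 0),
                new JointRotation("r_elbow", 0, 0, 75))));
        System.out.println(track.keyPoses);
    }
}
```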





INTAKT - Interaktive Avatar Kommunikations-Technologie

INTAKT is a project funded by the German Federal Ministry of Education and Research to foster cooperation between small and medium-sized enterprises and research institutions. In INTAKT we explore the tight coupling of character animation technology and intelligent authoring tools. We cooperate with the character animation company Charamel GmbH and are working on extensions of our SceneMaker tool. A first milestone application was an instrumented supermarket (in cooperation with the Innovative Retail Laboratory, St. Wendel) in which multiple virtual shop assistants give advice and talk both to the user and to the user's personal (mobile) agent.





THEACO: Gesture and Emotion: Can basic gestural form features discriminate emotions?

The question of how exactly gesture and emotion are interrelated is still sparsely covered in research, yet it is highly relevant for building affective artificial agents. In our study, we investigate how basic gestural form features (handedness, hand shape, palm orientation, and motion direction) are related to components of emotion. Our results indicate that there may be a universal association between gesture handedness and the emotional dimensions of pleasure and arousal.



IVAN: A plan-based approach for affective sports commentary in real-time

The IVAN system (Intelligent Interactive Virtual Agent Narrators) generates real-time affective commentary on a tennis game that is provided as an annotated video. The system employs two distinguishable virtual agents with different roles (TV commentator, expert), personality profiles, and positive, neutral, or negative attitudes toward the players. Dialogue is generated with an HTN planner, which makes it possible to plan larger dialogue contributions and to generate alternative plans.
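To give an idea of the planning approach, here is a toy HTN-style decomposition (task names, attitudes, and utterances are invented; the actual IVAN planner is far richer):

```java
import java.util.List;

/** Toy HTN-style decomposition for commentary (invented task names). */
public class CommentaryPlanner {

    /** Decompose an abstract task into primitive utterances. */
    static List<String> decompose(String task, String attitude) {
        switch (task) {
            case "CommentWinningShot":
                // Alternative methods: the attitude selects the expansion,
                // which is what gives the planner alternative plans.
                if (attitude.equals("positive")) {
                    return List.of("say: What a fantastic passing shot!",
                                   "gesture: enthusiastic");
                }
                return List.of("say: The point goes to the other side.",
                               "gesture: neutral nod");
            case "HandOverToExpert":
                return List.of("say: What do you make of that?",
                               "gaze: look at expert");
            default:
                return List.of(task);   // treat unknown tasks as primitive
        }
    }

    public static void main(String[] args) {
        decompose("CommentWinningShot", "positive").forEach(System.out::println);
    }
}
```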



2008



IGaze - Studying reactive gaze behavior in semi-immersive human-avatar interactions

IGaze is a semi-immersive human-avatar interaction system. Using head tracking and an illusionistic 3D effect, we let users interact with a talking avatar in a job interview scenario. The avatar features reactive gaze behavior that adapts to the user's position according to exchangeable gaze strategies. In user studies we showed that two gaze strategies successfully convey the intended impression of dominance/submission and that the 3D effect was positively received. We argue that IGaze is a suitable setup for exploring reactive nonverbal behavior synthesis in human-avatar interactions.
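Exchangeable gaze strategies might look like this (a sketch; the interface and the concrete timing values are assumptions, not the IGaze implementation):

```java
/** Sketch of exchangeable reactive gaze strategies (assumed API). */
public class GazeDemo {

    /** Decide where to look, given the tracked user head position. */
    interface GazeStrategy {
        String nextGazeTarget(double userX, double userY, double tSeconds);
    }

    /** Dominant: hold eye contact almost continuously. */
    static class Dominant implements GazeStrategy {
        public String nextGazeTarget(double x, double y, double t) {
            return "look at user (" + x + ", " + y + ")";
        }
    }

    /** Submissive: periodically avert the gaze downwards. */
    static class Submissive implements GazeStrategy {
        public String nextGazeTarget(double x, double y, double t) {
            boolean avert = (t % 4.0) > 2.5;   // avert 1.5s out of every 4s
            return avert ? "look down" : "look at user (" + x + ", " + y + ")";
        }
    }

    public static void main(String[] args) {
        GazeStrategy s = new Submissive();
        for (double t = 0; t < 8; t += 1.0)
            System.out.println(t + "s: " + s.nextGazeTarget(0.1, 0.0, t));
    }
}
```

Swapping the strategy object is all it takes to change the conveyed impression, which is why this kind of setup lends itself to controlled user studies.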


Toward Natural Gesture Synthesis: Gesture Modeling and Animation Based on a Probabilistic Recreation of Speaker Style

In this cooperation between DFKI and MPI Informatik we succeeded in generating and animating style-consistent manual gestures. The gesture style was modelled from TV material of human speakers using hand-coded annotations, semantic tags, and Markov models. The runtime system can automatically generate coverbal gestures for new input text in the style of the modelled human speaker.
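The core of such a style model can be sketched as a first-order Markov chain over annotated gesture lexemes (a strong simplification; the lexeme names are invented):

```java
import java.util.*;

/** First-order Markov chain over gesture lexemes (simplified sketch). */
public class GestureStyleModel {

    // Observed successors per lexeme; duplicates encode the counts.
    private final Map<String, List<String>> successors = new HashMap<>();
    private final Random rng = new Random(42);

    /** Count bigrams from one speaker's annotated gesture sequence. */
    void train(List<String> sequence) {
        for (int i = 0; i + 1 < sequence.size(); i++)
            successors.computeIfAbsent(sequence.get(i), k -> new ArrayList<>())
                      .add(sequence.get(i + 1));
    }

    /** Sample the next gesture in the modelled speaker's style. */
    String next(String current) {
        List<String> options = successors.getOrDefault(current, List.of("rest"));
        return options.get(rng.nextInt(options.size()));
    }

    public static void main(String[] args) {
        GestureStyleModel m = new GestureStyleModel();
        m.train(List.of("beat", "cup", "beat", "point", "beat", "cup"));
        System.out.println(m.next("beat"));   // "cup" or "point", per counts
    }
}
```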



IDEAS4Games (AI Poker)

We present two virtual characters in an interactive poker game that uses RFID-tagged poker cards for the interaction. To support the game creation process, we have combined, in a unique way, models, methods, and technology currently investigated in the ECA research field. A powerful and easy-to-use multimodal dialog authoring tool is used for modeling game content and interaction. The poker characters rely on a sophisticated model of affect and a state-of-the-art speech synthesizer.



ERIC - A Generic Rule-based Framework for an Affective Embodied Commentary Agent

ERIC is an affective embodied agent for realtime commentary in many domains. The underlying architecture is rule-based, generic, and lightweight, based on Java/Jess modules. Apart from reasoning about dynamically changing events, the system can produce coherent natural language and nonverbal behavior, based on a layered model of affect (personality, mood, emotion).
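A drastically reduced sketch of the idea, in plain Java rather than Jess (the rule content, events, and thresholds are invented):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Reduced sketch of a rule-driven commentary loop (invented rules). */
public class CommentaryAgent {

    private final Deque<String> events = new ArrayDeque<>();
    private double mood = 0.0;   // one scalar standing in for the affect layers

    void perceive(String event) { events.add(event); }

    /** Fire simple condition-action rules on the incoming events. */
    void update() {
        while (!events.isEmpty()) {
            String e = events.poll();
            if (e.equals("goal")) {
                mood += 0.5;   // an appraised emotion pushes the mood...
                System.out.println(mood > 0.6 ? "Goal! Unbelievable!"
                                              : "And that is a goal.");
            }
            mood *= 0.9;       // ...which slowly decays back to neutral
        }
    }

    public static void main(String[] args) {
        CommentaryAgent eric = new CommentaryAgent();
        eric.perceive("goal");   // first goal: neutral wording
        eric.perceive("goal");   // second goal soon after: excited wording
        eric.update();
    }
}
```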




Before 2008





Annotation of Multimodal Behavior

ANVIL is a Java-based tool, free for research and education, that allows the systematic annotation of digital video on multiple layers. Users can define their own coding schemes and are provided with an intuitive and efficient graphical interface.

ANVIL has recently been extended to visualize motion capture data with a 3D skeleton.
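Conceptually, the annotation data can be pictured like this (a simplified sketch; ANVIL's actual coding schemes are defined in XML specification files and are considerably richer):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of multi-layer video annotation (simplified data model). */
public class AnnotationBoard {

    /** One labelled interval on a track, in seconds of video time. */
    record Annotation(double start, double end, String value) {}

    /** A track/layer defined by the user's coding scheme. */
    record Track(String name, List<Annotation> annotations) {}

    private final List<Track> tracks = new ArrayList<>();

    Track addTrack(String name) {
        Track t = new Track(name, new ArrayList<>());
        tracks.add(t);
        return t;
    }

    public static void main(String[] args) {
        AnnotationBoard board = new AnnotationBoard();
        Track gesture = board.addTrack("gesture.phase");
        gesture.annotations().add(new Annotation(12.4, 13.1, "stroke"));
        System.out.println(board.tracks);
    }
}
```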

We have co-organized the international workshop series on "Multimodal Corpora", held in conjunction with the biennial LREC conferences: www.multimodal-corpora.org.

ANVIL Website: www.anvil-software.de

Selected publications:

Heloir, A., Neff, M., Kipp, M. (2010) Exploiting Motion Capture for Virtual Human Animation. In: Proceedings of the Workshop "Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality" at LREC-2010, ELDA, Paris.

Kipp, M. (2008) Spatiotemporal Coding in ANVIL. In: Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC-08).

Kipp, M. (2001) Anvil - A Generic Annotation Tool for Multimodal Dialogue. In: Proceedings of the 7th European Conference on Speech Communication and Technology (Eurospeech-01), Aalborg, pp. 1367-1370.


ALMA - A Layered Model of Affect

ALMA is a computational model for the real-time simulation of three basic affect types that human beings can experience. ALMA supports several methods for generating affect and models the interplay of the different affect types. Based on cognitive appraisal, the affect types are simulated in a hierarchical generation process.

Human behavior, especially interpersonal communication behavior, is essentially influenced by affect. Simulated affect can be exploited to make virtual characters in human-computer interfaces more believable.
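The layering can be sketched as follows (a strong simplification: ALMA represents mood in pleasure-arousal-dominance (PAD) space, with appraised emotions acting as short-lived impulses and the mood decaying toward a personality-derived baseline; the decay rate below is invented):

```java
/** Simplified sketch of ALMA's affect layers in PAD space. */
public class AffectModel {

    // Current mood as a point in pleasure-arousal-dominance space.
    private double p, a, d;
    // Personality-derived baseline that the mood drifts back to.
    private final double basP, basA, basD;

    AffectModel(double basP, double basA, double basD) {
        this.basP = p = basP; this.basA = a = basA; this.basD = d = basD;
    }

    /** An appraised emotion gives the mood a short-lived push. */
    void applyEmotion(double dp, double da, double dd) {
        p += dp; a += da; d += dd;
    }

    /** Per-tick decay of the mood back toward the baseline. */
    void tick() {
        p += 0.1 * (basP - p);
        a += 0.1 * (basA - a);
        d += 0.1 * (basD - d);
    }

    public static void main(String[] args) {
        AffectModel m = new AffectModel(0.2, 0.0, 0.1);
        m.applyEmotion(0.4, 0.3, 0.0);        // e.g. a "joy" emotion
        for (int i = 0; i < 5; i++) m.tick();
        System.out.printf("mood PAD = (%.2f, %.2f, %.2f)%n", m.p, m.a, m.d);
    }
}
```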

Website: ALMA homepage

Selected publications:
Gebhard, P. (2005) ALMA - A Layered Model of Affect. In: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'05), 29-36, Utrecht.

Gebhard, P. and Kipp, K.H. (2006) Are Computer-generated Emotions and Moods Plausible to Humans? In: Proceedings of the 6th International Conference on Intelligent Virtual Agents (IVA'06), 343-356, Marina del Rey, USA.




SceneMaker - A state-based authoring tool for interactive embodied agent applications

SceneMaker is a tool for authoring scenes for adaptive, interactive performances. These performances are based on automatically generated and prescripted scenes, which can be authored with SceneMaker in a two-step approach: in step one, the scene flow is defined using cascaded finite state machines; in step two, the content of each scene is provided. This can be done either manually, using a simple scripting language, or by integrating scenes that are automatically generated at runtime based on a domain and dialogue model. Both scene types can be interwoven in our plan-based, distributed platform. The system provides a context memory with access functions that the author can use to make scenes user-adaptive. The SceneMaker toolkit enables non-experts to compose adaptive, interactive performances in a rapid-prototyping approach.
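The first authoring step, the scene flow, can be pictured as a finite state machine (a minimal sketch; SceneMaker's real models are cascaded statecharts with conditions and a context memory, and the scene names here are invented):

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of a scene-flow state machine (simplified). */
public class SceneFlow {

    /** Transitions: (scene, user input) -> next scene. */
    private final Map<String, Map<String, String>> transitions = new HashMap<>();
    private String current = "Welcome";

    void addTransition(String from, String input, String to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(input, to);
    }

    /** Play the current scene, then advance on the user's input. */
    void step(String userInput) {
        System.out.println("Playing scene: " + current);
        current = transitions.getOrDefault(current, Map.of())
                             .getOrDefault(userInput, current);
    }

    public static void main(String[] args) {
        SceneFlow flow = new SceneFlow();
        flow.addTransition("Welcome", "yes", "Tour");
        flow.addTransition("Welcome", "no", "Goodbye");
        flow.step("yes");   // Welcome -> Tour
        flow.step("done");  // stays in Tour (no matching transition)
    }
}
```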

Selected publications:

Patrick Gebhard, Michael Kipp, Martin Klesen and Thomas Rist (2003) Authoring Scenes for Adaptive, Interactive Performances. In: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-03), ACM Press, New York, pp. 725-732.

Klesen, M., Kipp, M., Gebhard, P. and Rist, T. (2003) Staging Exhibitions: Methods and tools for modelling narrative structure to produce interactive performances with virtual actors. In: Virtual Reality. Special Issue on Storytelling in Virtual Environments 7 (1), pp. 17-29, Springer.


Ligabot

Ligabot is an embodied agent that answers questions about soccer using natural language input and output. Located at the Deutsches Museum in Munich, it exemplifies language technology research conducted by Prof. Wahlster, who won the German Federal President's Zukunftspreis (Future Award) in 2001.

A cooperation with Sympalog and Charamel.

Researchers: Alassane Ndiaye, Michael Kipp

VirtualConstructor (COHIBIT)

At Volkswagen's Autostadt in Wolfsburg, two virtual characters, Jara and Taron, invite visitors to build model cars from real car parts. Using camera and RFID technology for input, the two characters seamlessly interact with the real world. They are controlled using statecharts modelled in our SceneMaker tool. The project was also presented at CeBIT 2007.

A cooperation with Augsburg University, Charamel and Autostadt.

Researchers: Alassane Ndiaye, Patrick Gebhard, Michael Kipp, Martin Rumpler, Michael Schneider, Gernot Gebhard.

DFKI project website
Autostadt exhibit website

Publications:

Kipp, M., Kipp, K.H., Ndiaye, A. and Gebhard, P. (2006) Evaluating the Tangible Interface and Virtual Characters in the Interactive COHIBIT Exhibit. In: Proceedings of the 6th International Conference on Intelligent Virtual Agents (IVA 2006), Springer, pp. 434-444.

Ndiaye, A., Gebhard, P., Kipp, M., Klesen, M., Schneider, M. and Wahlster, W. (2005) Ambient Intelligence in Edutainment: Tangible Interaction with Life-Like Exhibit Guides. In: Proceedings of INTETAIN 2005, Springer, pp. 104-113.

Kipp, M. (2006) Creativity meets Automation: Combining Nonverbal Action Authoring with Rules and Machine Learning. In: Proceedings of the 6th International Conference on Intelligent Virtual Agents (IVA 2006), Springer, pp. 230-242.
