First International Workshop on
Sign Language Translation and Avatar Technology
(SLTAT)

 

10-11 January 2011

 

Bundesministerium für Arbeit und Soziales

(Federal Ministry of Labour and Social Affairs)

Wilhelmstraße 49

Berlin

 

 

Programme

 

 

 

 

 

Organizers:

Michael Kipp

Alexis Heloir

Thomas Hanke

 

 

 

 

 

supported by:



Programme

 

SUNDAY, 9 Jan

 

19:30 Get-together at restaurant "12 Apostel"

 

Georgenstraße 2 / S-Bahnbögen 177-180 (S-Bahn station "Friedrichstraße")

10117 Berlin

Phone: +49 (0) 30/201 02 22

 

 

 

MONDAY, 10 Jan

 

9:00 Welcome coffee

 

9:30 Invited talk by Matt Huenerfauth

 

10:30 Coffee break

 

11:00 Challenge 2: Avatars - talks (1:45)

 

            (3 presentations à 30 mins incl. discussion/video)

 

An Avatar to Depict Sign Language: Building from Reusable Hand Animation

(Rosalee Wolfe, John McDonald, Jerry Schnepp)

 

Signing Avatars: Linguistic and Computer Animation Challenges

(Sylvie Gibet, Nicolas Courty, Kyle Duarte)

 

Automatic generation of French Sign Language

(M. Delorme, A. Braffort, M. Filhol)

 

 

12:30 Lunch

 


13:45 Challenge 2: Avatars - talks (ctd.)

 

            (4 presentations à 30 mins incl. discussion + video)

 

Overview of Signing Avatar Work at UEA

(JRW Glauert, JR Kennaway, VJ Jennings, R Elliott)

 

A Feasibility Study on Signing Avatars

(Michael Kipp, Alexis Heloir, Quan Nguyen)

 

The ATLAS Interpreter of the Italian Sign Language

(Vincenzo Lombardo, Fabrizio Nunnari, Rossana Damiano)

 

Sign speech synthesis system

(Krnoul et al.)

 

 

16:00 Coffee break

 

 

16:30 Challenge 2: Discussion and problem definition

 

            (incl. video showreel of all avatars)

 

17:15 Coffee break

 

17:30 Challenge 2: Synthesis of possible solutions, future directions, ideas

 

            (roadmap, projects)

 

18:00 End of day

 

 

 

Evening socializing (to be announced)


TUESDAY, 11 Jan

 

8:45 Coffee

 

9:00 Welcome address by Brigitte Lampersbach (BMAS)

 

9:30 Challenge 1: Translation - talks

 

            (3 presentations à 25 mins)

 

Text to SL translation

(A. Braffort and M. Filhol)

 

Challenges in Statistical Sign Language Translation

(Christoph Schmidt, Daniel Stein and Hermann Ney)

 

Body at Work: using corpora in Sign Language Machine Translation

(Sara Morrissey)

 

 

11:00 Coffee break

 

11:30 Challenge 1: Translation - talks

 

            (2 presentations à 25 mins)

 

Machine Translation with Corpus Linguistics and HPSG?

(Eva Safar)

 

Linguistic Processing in the ATLAS Project

(Leonardo Lesmo, Alessandro Mazzei, Daniele Radicioni)

 

 

12:30 Lunch

 

13:30 Challenge 1: Discussion, problem definition, synthesis

 

15:00 Coffee break

 

15:15 Final discussion: Future directions, projects and workshops

 

16:00 End of the workshop


List of Abstract Submissions

(for each challenge, ordered alphabetically by first author)

 

Challenge 1: Translation (5 talks)

 

Text to SL translation

(A. Braffort and M. Filhol)

 

Linguistic Processing in the ATLAS Project

(Leonardo Lesmo, Alessandro Mazzei, Daniele Radicioni)

 

Body at Work: using corpora in Sign Language Machine Translation

(Sara Morrissey)

 

Machine Translation with Corpus Linguistics and HPSG?

(Eva Safar)

 

Challenges in Statistical Sign Language Translation

(Christoph Schmidt, Daniel Stein and Hermann Ney)

 

 

Challenge 2: Avatar Technology (7 talks + 4 videos)

 

Automatic generation of French Sign Language

(M. Delorme, A. Braffort, M. Filhol)

 

Signing Avatars: Linguistic and Computer Animation Challenges

(Sylvie Gibet, Nicolas Courty, Kyle Duarte)

 

Overview of Signing Avatar Work at UEA (VIDEO)

(JRW Glauert, JR Kennaway, VJ Jennings, R Elliott)

 

A Feasibility Study on Signing Avatars (VIDEO)

(Michael Kipp, Alexis Heloir, Quan Nguyen)

 

Sign speech synthesis system (VIDEO)

(Krnoul et al.)

 

The ATLAS Interpreter of the Italian Sign Language (VIDEO)

(Vincenzo Lombardo, Fabrizio Nunnari, Rossana Damiano)

 

An Avatar to Depict Sign Language: Building from Reusable Hand Animation

(Rosalee Wolfe, John McDonald, Jerry Schnepp)

 


Invited Talk

 

"Cyclic Data-Driven Research on American Sign Language Animation"

 

Matt Huenerfauth

Assistant Professor of Computer Science and Linguistics

The City University of New York (CUNY)

 

Abstract:

American Sign Language (ASL) animation generation software can improve the accessibility of information and services for deaf individuals in the U.S. with low English literacy.  However, comprehension-based evaluation studies with native ASL signers have identified limits in the understandability of current ASL animation systems.  We believe that the complexity of ASL animation generation requires a data-driven approach based on corpora of ASL collected from native signers. 

 

As an example, ASL animations require correct speed, timing, and pauses in order to produce accurate and understandable results -- the analysis of timing patterns in sign language data collected from human signers can be used to set such animation parameters.  Further, the motion-path of individual signs in an ASL sentence can vary greatly, depending on various linguistic factors.  For instance, entities under discussion can be associated with 3D points in space around a signer, and the movements of verb signs are deflected from their standard motion path based on how the subject and object of the verb have been established in the signing space.  Computational models of the motion path of signs and the use of space by signers are necessary for generating natural and understandable ASL sentences -- concatenation of animations of signs from a fixed lexicon would be insufficient for generating correctly inflected signs or natural coarticulation effects. 

 

For this reason, the Linguistic and Assistive Technologies Lab (LATLab) at CUNY has begun a multi-year project to build the first motion-capture corpus of multi-sentential ASL utterances.  Native ASL signers are being recorded performing spontaneous and directed ASL sentences while wearing motion-capture body suits, gloves, eye-trackers, and head-trackers.  This data is being linguistically annotated by native ASL signers to produce a permanent research resource.  While data-driven approaches are mainstream in the computational linguistics community, the size of ASL corpora will remain small (relative to those for written languages) for the foreseeable future -- due to the time-consuming nature of sign language corpora creation and annotation.  To compensate for this data scarcity, our laboratory has employed a cyclic research paradigm that consists of: directed data-collection of a specific linguistic phenomenon of interest, creation of models for this phenomenon, synthesis of prototype animations based on the new model (and older models for comparison purposes), conduct of an experimental study with native ASL signers evaluating animations via comprehension questions, and iterative refining of the model and re-evaluation.

 

This talk will give an overview of the project, our corpus collection and annotation techniques, our user-based ASL animation evaluation approach, and our current progress.  The key aspects of our research paradigm include: the use of linguistic data on targeted phenomena, the use of experimental studies with native ASL signers answering comprehension questions about animations, and the involvement of native ASL signers in the research process as informants, annotators, and research team members.  In particular, this talk will focus on our laboratory's recent research on producing models of ASL timing and the inflection of ASL verb performances for unseen arrangements of subject/object reference points in the signing space. 

 

Biography:

Matt Huenerfauth has been an assistant professor at The City University of New York (CUNY) since 2006; he is on the faculty of the Department of Computer Science of CUNY Queens College, the Doctoral Program in Computer Science at CUNY Graduate Center, and the Graduate Program in Linguistics at CUNY Graduate Center.  He has taught courses on assistive technology for people with disabilities, artificial intelligence, computational linguistics, human computer interaction, and other topics in computer science and linguistics. Huenerfauth's research focuses on the design of computer technology to benefit people who are deaf or have low levels of written-language literacy.  In 2008, Huenerfauth received a five-year Faculty Early Career Development (CAREER) Award from the National Science Foundation to support his research on American Sign Language animations.  Since 2008, he has served as an Associate Editor of the ACM Transactions on Accessible Computing (TACCESS), the Association for Computing Machinery's journal in the field of assistive technology and accessibility for people with disabilities. In 2005 and 2007, he received the Best Paper Award at the International ACM SIGACCESS Conference on Computers and Accessibility, the major computer science conference on assistive technology for people with disabilities.  He received a Ph.D. (2006) and M.S.E. (2004) in Computer Science from the University of Pennsylvania, an M.Sc. (2002) from University College Dublin, and an M.S. (2001) and H.B.S. (2001) from the University of Delaware.

 

 

 


List of Participants

(sorted by institution)

 

Kyle Duarte, Univ. de Bretagne Sud, France

Sylvie Gibet, Univ. de Bretagne Sud, France

Matt Huenerfauth, City University of New York, USA

John C. McDonald, DePaul University, USA

Jerry Schnepp, DePaul University, USA

Rosalee Wolfe, DePaul University, USA

Alexis Heloir, DFKI, Germany

Michael Kipp, DFKI, Germany

Quan Nguyen, DFKI, Germany

Sara Morrissey, Dublin City University, Ireland

Sune Nielsen, Univ. of Copenhagen, Denmark

John Glauert, Univ. of East Anglia, Great Britain

Ralph Elliott, Univ. of East Anglia, Great Britain

Richard Kennaway, Univ. of East Anglia, Great Britain

Eva Safar, Univ. of East Anglia, Great Britain

Thomas Hanke, Univ. of Hamburg, Germany

Silke Matthes, Univ. of Hamburg, Germany

Jakob Storz, Univ. of Hamburg, Germany

Satu Worseck, Univ. of Hamburg, Germany

Maxime Delorme, LIMSI CNRS, France

Michael Filhol, LIMSI CNRS, France

Trevor Johnston, Macquarie University, Australia

Leonardo Lesmo, Politecnico di Torino, Italy

Fabrizio Nunnari, Politecnico di Torino, Italy

Paolo Prinetto, Politecnico di Torino, Italy

Gabriele Tiotto, Politecnico di Torino, Italy

Christoph Schmidt, RWTH Aachen, Germany

Uwe Zelle, RWTH Aachen, Germany

Zdenek Krnoul, Univ. of West Bohemia, Czech Rep.

Milos Zelezny, Univ. of West Bohemia, Czech Rep.

Jakub Kanis, Univ. of West Bohemia, Czech Rep.

Pavel Campr, Univ. of West Bohemia, Czech Rep.