Winner of the NAB (National Association of Broadcasters) Technology Innovation Award 2020

Adversarial Training for Multi-Channel Sign Language Production

Ben Saunders, Necati Cihan Camgöz, Richard Bowden: Adversarial Training for Multi-Channel Sign Language Production. In: The 31st British Machine Vision Virtual Conference (BMVC), 2020.

Abstract

Sign Languages are rich multi-channel languages, requiring articulation of both
manual (hands) and non-manual (face and body) features in a precise, intricate manner.
Sign Language Production (SLP), the automatic translation from spoken to sign languages,
must embody this full sign morphology to be truly understandable by the Deaf community.
Previous work has mainly focused on manual feature production, with an under-articulated
output caused by regression to the mean.
In this paper, we propose an Adversarial Multi-Channel approach to SLP. We frame
sign production as a minimax game between a transformer-based Generator and a conditional
Discriminator. Our adversarial discriminator evaluates the realism of sign production
conditioned on the source text, pushing the generator towards a realistic and
articulate output. Additionally, we fully encapsulate sign articulators with the inclusion of
non-manual features, producing facial features and mouthing patterns. We evaluate on the challenging RWTH-PHOENIX-Weather-2014T (PHOENIX14T) dataset, and report state-of-the-art SLP back-translation performance for manual production.
We set new benchmarks for the production of multi-channel sign to underpin future
research into realistic SLP.
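The minimax game between the generator and the conditional discriminator described in the abstract follows the standard adversarial objective: the discriminator is trained to score real sign sequences (conditioned on the source text) as real and generated ones as fake, while the generator is trained to fool it. The sketch below illustrates only these loss terms with plain scalar probabilities; the function names and the use of the non-saturating generator loss are illustrative assumptions, not the paper's exact formulation.

```python
import math

def bce(pred, target):
    # Binary cross-entropy for a single predicted probability.
    eps = 1e-12  # guard against log(0)
    return -(target * math.log(pred + eps) + (1.0 - target) * math.log(1.0 - pred + eps))

def discriminator_loss(d_real, d_fake):
    # Conditional discriminator: push D(real | text) -> 1 and D(G(text) | text) -> 0.
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    # Non-saturating generator objective: push D(G(text) | text) -> 1.
    return bce(d_fake, 1.0)

# At the game's equilibrium the discriminator outputs 0.5 everywhere,
# giving a generator loss of log(2) and a discriminator loss of 2*log(2).
print(generator_loss(0.5))        # ~0.693
print(discriminator_loss(0.5, 0.5))  # ~1.386
```

In the full model, `d_real` and `d_fake` would come from a discriminator that sees the skeleton pose sequence jointly with the source text embedding, so realism is judged relative to the sentence being signed rather than in isolation.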

BibTeX (Download)

@inproceedings{surrey858417,
title = {Adversarial Training for Multi-Channel Sign Language Production},
author = {Ben Saunders and Necati Cihan Camgöz and Richard Bowden},
url = {http://epubs.surrey.ac.uk/858417/},
year  = {2020},
date = {2020-08-01},
booktitle = {The 31st British Machine Vision Virtual Conference},
publisher = {British Machine Vision Association},
abstract = {Sign Languages are rich multi-channel languages, requiring articulation of both 
manual (hands) and non-manual (face and body) features in a precise, intricate manner. 
Sign Language Production (SLP), the automatic translation from spoken to sign languages, 
must embody this full sign morphology to be truly understandable by the Deaf community. 
Previous work has mainly focused on manual feature production, with an under-articulated 
output caused by regression to the mean. 
In this paper, we propose an Adversarial Multi-Channel approach to SLP. We frame 
sign production as a minimax game between a transformer-based Generator and a conditional 
Discriminator. Our adversarial discriminator evaluates the realism of sign production 
conditioned on the source text, pushing the generator towards a realistic and 
articulate output. Additionally, we fully encapsulate sign articulators with the inclusion of 
non-manual features, producing facial features and mouthing patterns. We evaluate on the challenging RWTH-PHOENIX-Weather-2014T (PHOENIX14T) dataset, and report state-of-the-art SLP back-translation performance for manual production. 
We set new benchmarks for the production of multi-channel sign to underpin future 
research into realistic SLP.},
note = {Awaiting final version published online.},
pubstate = {published},
tppubtype = {inproceedings}
}