Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation

Necati Cihan Camgöz, Oscar Koller, Simon Hadfield, Richard Bowden: Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation. In: IEEE Conference on Computer Vision and Pattern Recognition 2020, Forthcoming.

Abstract

Prior work on Sign Language Translation has shown that having a mid-level sign gloss representation (effectively recognizing the individual signs) improves the translation performance drastically. In fact, the current state-of-the-art in translation requires gloss-level tokenization in order to work. We introduce a novel transformer-based architecture that jointly learns Continuous Sign Language Recognition and Translation while being trainable in an end-to-end manner. This is achieved by using a Connectionist Temporal Classification (CTC) loss to bind the recognition and translation problems into a single unified architecture. This joint approach does not require any ground-truth timing information, simultaneously solves two co-dependent sequence-to-sequence learning problems, and leads to significant performance gains. We evaluate the recognition and translation performances of our approaches on the challenging RWTH-PHOENIX-Weather-2014T (PHOENIX14T) dataset. We report state-of-the-art sign language recognition and translation results achieved by our Sign Language Transformers. Our translation networks outperform both sign-video-to-spoken-language and gloss-to-spoken-language translation models, in some cases more than doubling the performance (9.58 vs. 21.80 BLEU-4 score). We also share new baseline translation results using transformer networks for several other text-to-text sign language translation tasks.
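The abstract describes binding recognition and translation into one objective: a CTC loss over gloss sequences for the recognition branch plus a standard cross-entropy loss for the spoken-language decoder, optimized jointly. The sketch below illustrates this combination in PyTorch; the function name, tensor shapes, and loss weights are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

def joint_loss(gloss_logits, gloss_targets, input_lengths, target_lengths,
               word_logits, word_targets,
               recognition_weight=1.0, translation_weight=1.0):
    """Hypothetical joint objective: CTC recognition loss + translation
    cross-entropy, combined with scalar weights.

    gloss_logits: (T, N, C) encoder outputs over gloss vocabulary
    gloss_targets: (N, S) gloss label sequences (no blanks)
    word_logits:  (N, L, V) decoder outputs over spoken-word vocabulary
    word_targets: (N, L) spoken-language token ids
    """
    # CTC needs no frame-level alignment: it marginalizes over all
    # monotonic alignments between input frames and gloss targets.
    ctc = nn.CTCLoss(blank=0, zero_infinity=True)
    recognition_loss = ctc(gloss_logits.log_softmax(dim=-1),
                           gloss_targets, input_lengths, target_lengths)

    # Standard token-level cross-entropy for the translation decoder.
    xent = nn.CrossEntropyLoss()
    translation_loss = xent(word_logits.reshape(-1, word_logits.size(-1)),
                            word_targets.reshape(-1))

    return recognition_weight * recognition_loss + translation_weight * translation_loss
```

Because CTC marginalizes over alignments, this joint objective needs only sentence-level gloss and translation annotations, matching the abstract's claim that no ground-truth timing information is required.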

BibTeX (Download)

@inproceedings{surrey854110,
title = {Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation},
author = {Necati Cihan Camgöz and Oscar Koller and Simon Hadfield and Richard Bowden},
url = {http://epubs.surrey.ac.uk/854110/},
year = {2020},
date = {2020-02-01},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition 2020},
abstract = {Prior work on Sign Language Translation has shown that having a mid-level sign gloss representation (effectively recognizing the individual signs) improves the translation performance drastically. In fact, the current state-of-the-art in translation requires gloss-level tokenization in order to work. We introduce a novel transformer-based architecture that jointly learns Continuous Sign Language Recognition and Translation while being trainable in an end-to-end manner. This is achieved by using a Connectionist Temporal Classification (CTC) loss to bind the recognition and translation problems into a single unified architecture. This joint approach does not require any ground-truth timing information, simultaneously solves two co-dependent sequence-to-sequence learning problems, and leads to significant performance gains. We evaluate the recognition and translation performances of our approaches on the challenging RWTH-PHOENIX-Weather-2014T (PHOENIX14T) dataset. We report state-of-the-art sign language recognition and translation results achieved by our Sign Language Transformers. Our translation networks outperform both sign-video-to-spoken-language and gloss-to-spoken-language translation models, in some cases more than doubling the performance (9.58 vs. 21.80 BLEU-4 score). We also share new baseline translation results using transformer networks for several other text-to-text sign language translation tasks.},
keywords = {University of Surrey},
pubstate = {forthcoming},
tppubtype = {inproceedings}
}