New online survey available! We have developed a new survey to test how well the information of a signed weather forecast (by a human signer) is conveyed by the different technologies we are exploring in the project. We would like to ask you to support our project by completing the survey, which takes about 20 minutes. The questionnaire is available in VGT/Dutch. Please click on this link to view the questionnaire: http://c4all-vgt.human-factors-consult.de/
Automated Sign Language Production is the automatic translation from a spoken language to a sign language. The ultimate goal would be to automatically produce a video of a signer that is indistinguishable from a real person. If done well, this could be an important technology for the Deaf community, increasing the quantity of sign language content available. But of course, generating translations of the same quality as a human interpreter is extremely challenging.
Previous work on computational sign language has mainly focused on the recognition of sign language from video and its translation into spoken language, the opposite task from production. However, recognition is more useful to hearing people, as it allows them to understand the Deaf; a native sign language user does not need a system that recognises sign. Systems that can accurately translate and produce sign, on the other hand, would be of use to the Deaf: not to replace human interpreters, but to provide translation into native sign language when an interpreter is not available.
In the Content4All project, we have been working on an initial technology for automatic sign language production, starting with a focus on the comprehensibility of the productions rather than their visual quality. We have built on state-of-the-art research in deep learning and machine translation to develop a prototype that translates from German to DGS within a limited domain. The current production takes the form of a skeleton signer, with body, hand and face joints matching the movements of a signer.
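To make the idea of a "skeleton signer" concrete, the sketch below shows one possible data layout for such an output: a sequence of pose frames, each holding 2D coordinates for body, hand and face joints. The joint counts, the `PoseFrame` type and the `translate_to_poses` stub are illustrative assumptions, not the project's actual format; in the real prototype a learned sequence-to-sequence model produces the poses.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical joint layout; the prototype's real skeleton format is not
# described in the post, so these counts are illustrative assumptions.
BODY_JOINTS = 8
HAND_JOINTS = 21 * 2   # 21 keypoints per hand, both hands
FACE_JOINTS = 70
NUM_JOINTS = BODY_JOINTS + HAND_JOINTS + FACE_JOINTS  # 120

@dataclass
class PoseFrame:
    """One frame of the skeleton signer: an (x, y) position per joint."""
    joints: List[Tuple[float, float]]

def translate_to_poses(german_text: str, frames_per_token: int = 3) -> List[PoseFrame]:
    """Toy stand-in for the learned translation model: the real system maps a
    German sentence to a sequence of skeleton poses with a neural network;
    here we simply emit a neutral pose for each input token."""
    neutral = [(0.5, 0.5)] * NUM_JOINTS
    tokens = german_text.split()
    return [PoseFrame(joints=list(neutral))
            for _ in tokens
            for _ in range(frames_per_token)]

poses = translate_to_poses("morgen regnet es")
print(len(poses), len(poses[0].joints))  # → 9 120
```

Rendering such a joint sequence as a moving stick figure is what produces the skeleton signer seen in the current productions.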
This is very much a work in progress and we know that a lot more work is required before the technology can be used by the end user. However, we believe it is important to get early feedback from the Deaf community on our sign language production technology, in order to understand how comprehensible our current productions are and how they can be improved. We believe the Deaf communities should be empowered and involved in all steps of the development of any technology that targets their native languages. Hence, we created this survey to get your invaluable feedback and to guide the next steps of the research. Initially we are focusing on understandable sign language production, so we would appreciate your opinions on the comprehension of sign rather than the look of the avatar. We greatly value your input into our technology and will strive to take all opinions from the survey on board. Check the survey in the survey tab on our website… Update note: The survey is now closed, thank you for your participation!
The online survey was conducted between mid-January and end-March 2020. You can find more information on the Survey page.
With our survey we would like to find out what exactly a TV picture should look like in which a virtual sign language interpreter translates spoken language into sign language. For this purpose, we need to know what deaf TV viewers want from such a system. We will start with general questions such as your age and language. After that, we will show you some TV pictures and ask you how much you like them. At the end of the questionnaire, you can assemble your own TV picture the way you prefer it.
A second iteration of the virtual human has been captured in the volumetric video studio of HHI in Berlin, Germany. Two deaf signers (VGT, DSGS) gave their best performances and, at the end, met their digital twins. A short impression in the following video:
For the first time, three European ‘Horizon2020’ projects joined forces in a booth in the Future Zone of IBC, showing their research and development products and services targeting ‘Media Accessibility’.
Dive into the exciting European “Horizon2020” collaborative projects, researching and developing products and services targeting media accessibility: Content4All: AI and photorealistic 3D virtual humans to help broadcasters generate signed content… ImAc: new deployment methods for Subtitles, Audio Description, Audio Subtitling, Sign Language in immersive environments … EasyTV: media-improved access services for distribution of novel accessibility features …
Hello Astrid and Vaishnavi (from HFC), you are doing the user research in the Content4All project. Can you first elaborate on your approach to this research?
Our users, deaf and hard-of-hearing people, use their visual sense much more than hearing people do. That is why they have a different experience of watching TV, which is usually an audio-visual experience: the audio stream has to be provided via some visual output, and this output should of course be as accessible as possible to a deaf person. However, what is often unknown is that the written/spoken language is not their mother tongue; sign language is. That is why subtitles are not sufficient to provide an equal TV and media experience for the Deaf Community. In the project, we therefore want to develop a technology that enables a more widespread use of sign language for TV shows such as the weather forecast.
In the process, we followed a user-centered design approach, which means that users are involved in each step of the development, from requirements assessment to the evaluation of the technology. Users are involved through different methods such as interviews, workshops, and online questionnaires. This is called a ‘multi-method approach’. What is also important is that we involve not only the users but also possible stakeholders such as broadcasters, so that synergies can be exploited immediately.
You mentioned the ‘multi-method approach’ and user-centered design. Why did you choose these methods for this project? What are the benefits, and what are the academic grounds for using them?
The multi-method approach helps us collect data from different theoretical angles, which in turn provides us with different insights in terms of quantitative and qualitative data. Additionally, we can cross-check converging and diverging requirements stemming from different user samples with different characteristics such as age, affinity for technology, preferences in media technology usage and social media usage. There is no alternative to a user-centered design approach if a technical solution is to be accepted by the users. We at HFC follow this approach regularly. 😊
One of the topics researched was media usage. Can you give us the main outcomes? Which medium is most used by the Deaf Community?
It’s no surprise that the smartphone is used by almost everyone in this world, including the Deaf Community. As younger people are usually digital natives, they use smartphones much more often than older age groups.
One remarkable finding is that deaf people prefer to receive informative content, such as news, in their mother tongue, but not affective content with visual action, such as movies. An easy explanation for this is visual attention allocation: a good comparative example is when hearing people watch a movie with foreign-language subtitles, where multiple visual inputs can be taxing.
The most important finding is that every possible user has his/her own expectations towards the system depending on his/her own abilities and personal preferences.
Conducting the user requirements research emphasised the importance of this project, as nearly all users criticized the lack of signed content in traditional TV. Thus, they have to watch TV with subtitles in a language that is not their mother tongue.
However, the research also showed that young deaf people seem to prefer the written languages, unlike older generations.
Are there other requirements regarding the virtual human that were researched? What were the outcomes regarding the TV interface and other accessibility features?
These are questions we are addressing in an online questionnaire that is about to be launched. In the questionnaire we vary different components of a TV screen with a signer, such as how big the signer is, or whether there is a background frame. The questionnaire will be sent to users in different regions, e.g. in Flanders (Belgium) and Switzerland.
Is there a lot of research done regarding these topics or is this unique in the world?
Unfortunately, there is not much applied research regarding accessibility as such, and regarding accessibility and media even less research is available. There are just a few recommendations, for instance the Universal Design approach for system development, or the Web Content Accessibility Guidelines for websites. So, we are glad to be able to contribute some insights into the media usage of deaf consumers and to support our project from a user research perspective.
The International Broadcasting Convention, more commonly known by its acronym IBC, is an annual trade show held in September at the RAI Exhibition and Convention Centre in Amsterdam, the Netherlands. IBC’s tagline is “The World’s Most Influential Media, Entertainment & Technology Show”, and the show is aimed at broadcasters, content creators/providers, equipment manufacturers, professional and technical associations, and other participants in the broadcasting, entertainment and technology industry.
Content4All was represented at the booth of SwissTXT in the exhibition area. The project was presented in full, and visitors with an interest in accessibility were professionally guided by Robin Ribback, representative of the Content4All project.