The online survey was conducted between mid-January and late March 2020. You can find more information on the Survey page.
With our survey we would like to find out what a TV picture in which a virtual sign language interpreter translates spoken language into sign language should look like. For this purpose, we need to know what deaf TV viewers want from such a system. We will start with general questions, such as your age and language. After that, we will show you some TV pictures and ask you how much you like them. At the end of the questionnaire, you can assemble your own TV picture the way you prefer it.
A second iteration of the virtual human has been captured in the volumetric video studio of HHI in Berlin, Germany. Two deaf signers (VGT, DSGS) gave their best performances and, at the end, met their digital twins. The following video gives a short impression:
- Hello Astrid and Vaishnavi (from HFC), you are doing the user research in the Content4All project. Can you first elaborate on your approach to this research?
Our users, who are deaf and hard of hearing, rely on their visual sense much more than hearing people do. That is why they have a different experience of watching TV, which is usually an audio-visual experience. The audio stream therefore has to be provided via some visual output, and this output should of course be as accessible as possible to a deaf person. What is often unknown, however, is that the written/spoken language is not their mother tongue; sign language is. That is why subtitles are not sufficient to provide an equal TV and media experience for the Deaf Community. In the project, we therefore want to develop a technology that enables a much wider use of sign language for TV shows such as the weather forecast.
In the process, we followed a user-centered design approach, which means that users are involved in each step of the development, from requirements assessment to the evaluation of the technology. Users are involved through different methods such as interviews, workshops, and online questionnaires. This is called a 'multi-method approach'. It is also important that we involve not only the users but also other stakeholders, such as broadcasters, so that synergies can be exploited immediately.
- You mentioned the 'multi-method approach' and user-centered design. Why did you choose these methods for this project? What are the benefits and the academic rationale for using them?
The multi-method approach helps us collect data from different theoretical angles, which in turn provides different insights in the form of quantitative and qualitative data. Additionally, we can cross-check converging and diverging requirements stemming from user samples with different characteristics, such as age, affinity for technology, and preferences in media technology and social media usage. There is no alternative to a user-centered design approach if a technical solution is to be accepted by the users. We at HFC follow this approach regularly. 😊
- One of the topics researched was media usage. Can you give us the main outcomes? Which medium is used most by the Deaf Community?
It's no surprise that the smartphone is used by almost everyone in the world, including the Deaf Community. As younger people are usually digital natives, they use smartphones much more often than older age groups do.
One remarkable finding is that deaf people prefer to receive informative content, such as news, in their mother tongue, but not affective content with much visual action, such as movies. An easy explanation for this is visual attention allocation: multiple visual inputs can be taxing. A good comparative example is hearing people watching a movie with foreign-language subtitles.
The most important finding is that every potential user has their own expectations of the system, depending on their own abilities and personal preferences.
Conducting the user requirements research underlined the importance of this project, as nearly all users criticized the lack of signed content on traditional TV. As a result, they have to watch TV with subtitles in a language that is not their mother tongue.
It also showed, however, that young deaf people seem to have a stronger preference for written language than older generations do.
- Are there other requirements regarding the virtual human that were researched? What were the outcomes regarding the TV interface and other accessibility features?
These are questions we are addressing in an online questionnaire that is about to be launched. In the questionnaire, we vary different components of a TV screen with a signer, such as how big the signer is or whether there is a background frame. The questionnaire will be sent to users in different regions, e.g. Flanders (Belgium) and Switzerland.
- Has a lot of research been done on these topics, or is this unique in the world?
Unfortunately, there is not much applied research on accessibility as such, and on accessibility and media there is even less. There are only a few recommendations, for instance in the Universal Design approach for system development, or the Web Content Accessibility Guidelines for websites. So we are glad to be able to contribute some insights into the media usage of deaf consumers and to support our project from a user research perspective.
Thank you for the interview!