New online survey available!

We have developed a new survey to test how well the information in a weather forecast signed by a human is conveyed by the different technologies we are exploring in the project.
We would like to ask you to support our project by completing the survey, which takes about 20 minutes. The questionnaire is available in VGT/Dutch.
Please click on this link to view the questionnaire:

Automated Sign Language to produce signs helping the Deaf, not the hearing…

Automated Sign Language Production is the automatic translation from a spoken language to a sign language. The ultimate goal would be to automatically produce a video of a signer that is indistinguishable from a real person. If done well, this could be an important technology for the Deaf community, increasing the quantity of sign language content available. But of course, generating translations of the same quality as a human interpreter is extremely challenging.

Previous work on computational sign language has mainly focused on recognising sign language from video and translating it into a spoken language, the opposite task from production. However, recognition is more useful to the hearing, as it allows them to understand the Deaf; a native sign language user does not need a system that recognises sign. Systems that can accurately translate and produce sign, by contrast, would be of use to the Deaf: not to replace human interpreters, but to provide translation into native sign language when an interpreter is not available.

In the Content4All project, we have been working on an initial technology for automatic sign language production, starting with a focus on the comprehensibility of the productions rather than their visual quality. We have used state-of-the-art research in deep learning and machine translation to develop a prototype that translates from German to DGS within a limited topic domain. The current production takes the form of a skeleton signer, with the body, hand and face joints matching the movements of a signer.
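A skeleton signer like the one described above can be thought of as a sequence of joint positions per video frame. The following is a minimal illustrative sketch of that idea only; the joint names, coordinates, and the helper function are hypothetical and are not the project's actual code or data format.

```python
# Hypothetical sketch: a produced sign as a sequence of 2D joint
# positions per video frame (body, hand and face keypoints).
# Joint names and pixel coordinates below are illustrative only.
from math import hypot

# Two frames of a toy pose sequence: joint name -> (x, y) in pixels.
pose_sequence = [
    {"right_wrist": (320.0, 240.0), "left_wrist": (180.0, 250.0)},
    {"right_wrist": (330.0, 228.0), "left_wrist": (181.0, 249.0)},
]

def joint_displacement(frames, joint):
    """Total distance a joint travels across consecutive frames."""
    total = 0.0
    for prev, curr in zip(frames, frames[1:]):
        (x0, y0), (x1, y1) = prev[joint], curr[joint]
        total += hypot(x1 - x0, y1 - y0)
    return total

# A renderer would draw each frame's joints (and the bones between
# them) to produce the skeleton video; here we only measure motion.
print(joint_displacement(pose_sequence, "right_wrist"))
```

Rendering such a sequence frame by frame is what yields the skeleton video of the signer.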

This is very much a work in progress, and we know that a lot more work is required before the technology can be used by end users. However, we believe it is important to get early feedback from the Deaf community on our sign language production technology, in order to understand how comprehensible our current productions are and how they can be improved. We believe the Deaf communities should be empowered and involved in all steps of the development of any technology that targets their native languages. Hence, we created this survey to get your invaluable feedback and to guide the next steps of the research. Initially we are focusing on understandable sign language production, so we would appreciate your opinions on the comprehension of the signing rather than the look of the avatar. We greatly value your input, and we will strive to take all opinions from the survey on board. You can find the survey in the survey tab on our website.
Update note: The survey is now closed, thank you for your participation!

CONTENT4ALL at NAB 2020: winner for the NAB 2020 Technology Innovation Award!!!

Booth assigned to CONTENT4ALL before the NAB2020 event turned into an online and digital one (NAB Express)

We’re proud to announce that the CONTENT4ALL project won the NAB Technology Innovation Award 2020. The award, first presented at the 2009 NAB Show, recognizes organizations that bring advanced technology exhibits and demonstrations of significant merit to the NAB Show. Moreover, the nominated exhibit should present advanced research and development projects in communications technologies that have not yet been commercialized [1].

CONTENT4ALL was also supposed to exhibit the latest project advancements, including a live demo, at NAB 2020 Futures Park before the event turned into a fully online and digital one.

The award ceremony took place during the BEITC Opening Presentation segment scheduled on May 13th between noon and 1:00 pm Eastern at the NAB Show Express event on the BEIT Channel.

[1] NAB Technology Innovation Award webpage

Vertigo Starts artistic residency inspired by CONTENT4ALL presented: “Muted, a fall through silence” by Christophe Monchalin

Within the framework of the VERTIGO STARTS initiative of the European Commission, CONTENT4ALL hosted an artistic residency between 2019 and 2020.

Christophe Monchalin, the artist funded by Vertigo, drew inspiration from the project technologies, in particular those of our partner Fraunhofer HHI, to realize his artwork: a virtual reality artistic installation exploring the topic of abandonment from different angles, indeed infinite ones.

MUTED is the name of the artistic installation and was finally presented at the CENTQUATRE – PARIS on Saturday, Feb. 29th and Sunday, Mar. 1st during the Vertigo Starts Residencies Day.

MUTED can be visited here: and a “Making of” video is also available online.

Online questionnaire Content4All

The online survey was conducted between mid-January and the end of March 2020. You can find more information on the Survey page.

With our survey we would like to find out exactly how a TV picture should look in which a virtual sign language interpreter translates spoken language into sign language. For this purpose, we need to know what deaf TV viewers expect from such a system. We will start with general questions, such as your age and language. After that, we will show you some TV pictures and ask how much you like them. At the end of the questionnaire, you can assemble your own TV picture the way you prefer it.

Content4All at IBC, FutureZone in the Media Accessibility booth together with other Horizon2020 projects: ImAc and EasyTV

For the first time, three European ‘Horizon2020’ projects joined forces in a booth in the Future Zone of IBC, showing their research and development products and services targeting ‘Media Accessibility’.

Dive into the exciting European “Horizon2020” collaborative projects, researching and developing products and services targeting media accessibility:
  • Content4All: AI and photorealistic 3D virtual humans helping broadcasters generate signed content
  • ImAc: new deployment methods for Subtitles, Audio Description, Audio Subtitling and Sign Language in immersive environments
  • EasyTV: improved media access services for the distribution of novel accessibility features

Media Accessibility Booth at IBC 2019

HFC concludes unique user research as part of Content4All: the Deaf community gives input on media behaviours and on requirements for using virtual humans in media formats. The team explains how it worked in the following interview.

The HFC team during the Lugano general assembly.
  • Hello Astrid, Vaishnavi, (from HFC) you are doing the user research in the Content4All project.  Can you first elaborate on your approach towards this research?

Our users, deaf and hard of hearing people, rely on their visual senses much more than hearing people do. That is why they have a different experience of watching TV, which is usually an audio-visual experience. Hence, the audio stream has to be provided via some visual output, and this output should of course be as accessible as possible to a deaf person. What is often not known, however, is that the written/spoken language is not their mother tongue; sign language is. That is why subtitles are not sufficient to provide an equal TV and media experience for the Deaf Community. In the project, we therefore want to develop a technology that enables a more widespread use of sign language in TV shows such as the weather forecast.

In the process, we followed a user-centered design approach, which means that users are involved in each step of the development, from requirements assessment to the evaluation of the technology. Users are involved through different methods such as interviews, workshops, and online questionnaires; this is called a ‘multi-method approach’. What is also important is that we involve not only the users but also potential stakeholders such as broadcasters, so that synergies can be exploited immediately.

  • You mentioned the ‘multi-method approach’ and user-centered design. Why did you choose these methods for this project? What are the benefits and the academic grounds for using them?

The multi-method approach helps us collect data from different theoretical angles, which in turn provides us with different insights in terms of quantitative and qualitative data. Additionally, we can cross-check converging and diverging requirements stemming from different user samples with different characteristics, such as age, affinity for technology, preferences in media technology usage and social media usage. There is no alternative to a user-centered design approach if a technical solution is to be accepted by the users. We at HFC follow this approach regularly. 😊

  • One of the topics researched was media usage. Can you give us the main outcomes? Which medium is used most by the Deaf Community?

It’s no surprise that the smartphone is used by almost everyone in the world, including the Deaf Community. As younger people are usually digital natives, they use smartphones much more often than older age groups do.

One remarkable finding is that deaf people prefer to receive informative content, such as news, in their mother tongue, but not affective content with visual action, such as movies. An easy explanation for this is visual attention allocation: a good comparison is hearing people watching a movie with foreign-language subtitles. Multiple visual inputs can be taxing.

The most important finding is that every possible user has his/her own expectations towards the system depending on his/her own abilities and personal preferences.

Conducting the user requirements research emphasised the importance of this project, as nearly all users criticized the lack of signed content on traditional TV. Thus, they have to watch TV with subtitles, in a language that is not their mother tongue.

However, the research also showed that young deaf people seem to prefer written language, unlike older generations.

  • Were other requirements regarding the virtual human researched? What were the outcomes regarding the TV interface and other accessibility features?

These are questions we are addressing in an online questionnaire that is about to be launched. In the questionnaire we vary different components of a TV picture with a signer, such as how big the signer is, or whether there is a background frame. The questionnaire will be sent to users in different regions, e.g. Flanders (Belgium) and Switzerland.

  • Is there a lot of research done regarding these topics or is this unique in the world?

Unfortunately, there is not much applied research regarding accessibility as such, and regarding accessibility and media even less research is available. There are just a few recommendations for system development, for instance in the Universal Design approach, or, for websites, the Web Content Accessibility Guidelines. So, we are glad to be able to contribute some insights into the media usage of deaf consumers and to support our project from a user research perspective.

Thank you for the interview!

CONTENT4ALL co-organized the workshop “Responsible research and innovation: Participatory, Inclusive and Accessible Media” at NEM Summit 2019

Responsible research and innovation (RRI) is an approach that anticipates and assesses potential implications and societal expectations with regard to research and innovation, with the aim of fostering the design of inclusive and sustainable research and innovation. As it grows in importance in H2020 calls addressing societal challenges, RRI approaches are likely to become a decisive aspect of Horizon Europe, where industrial competitiveness is tightly interwoven with global challenges.
While media technologies contribute to shaping the world and have an extraordinary impact on the quality of democracy and on social cohesion, the notion of responsible media, and its implications throughout the R&D workflow from production to delivery, are often overlooked. This causes tensions with regulatory frameworks and raises barriers to market uptake and user acceptance.

Directives such as the refit of the Audiovisual Media Services Directive (AVMSD) and accessible-web legislation should find the EU media industry ready, and should be seen as an opportunity to turn what until now was considered expensive and problematic into normalized and profitable services. Similarly, in order to become competitive in a responsible way, media technologies ought to ensure social inclusiveness, representativeness and acceptability.

The keynotes and panel gave an overview of RRI applied to media and content technologies. They looked into methods for leveraging collective creativity and intelligence through participatory design and social innovation approaches. Moreover, they analyzed the latest EU-funded research on accessibility technologies for interaction with content, as required by the AVMSD directive, in all formats, also looking at alternative working scenarios, such as crowdsourcing or remote working, for a social, sustainable and inclusive society.

This was the agenda of the event:

PROVENANCE: an Intermediary-free solution enabling citizens to evaluate online content, Angel Sola (Fundación Cibervoluntarios)
Participatory design for responsible media and interactive technologies, Patricia Castillo (Eurecat)
Accessibility and Media: Towards an European platform for inclusive media (Accessibility Cluster: ImAc, EasyTV, Content4All, SIGN-HUB), Giacomo Inches (Fincons Group)
A curriculum for practitioner and professional creators of audiovisual accessible content, Rocio Bernabe (SDI München)

Roundtable discussion with the session speakers, Mario Montagud (i2Cat), and Federico Álvarez (UPM)

Session Chairs: Patricia Castillo (Eurecat) and Giacomo Inches (Fincons Group)

Moderator: Andrea Nicolai (T6 Ecosystems)