In the HULAT group, we are launching a new project: "Cognitive and sensory accessibility to video conferencing systems (ACCESS2MEET)", funded under the Convocatoria 2020 Proyectos Generación del Conocimiento y Retos Investigación call.
Its principal investigator is Lourdes Moreno, and the research team is made up of Paloma Martínez, Belén Ruiz-Mezcua, Isabel Segura-Bedmar, Israel González Carrasco, Jose Luis Lopez-Cuadrado, José Luis Martínez Fernández, Rodrigo Alarcón and Cristóbal Colón Ruiz. The project will last three years.
Currently, we are facing enormous socio-economic challenges that are affected by several factors, such as the ageing population and a substantial widening of the digital divide. Both the inclusion and innovation elements of these challenges must be addressed simultaneously. A significant portion of technologies involves interacting with people through user interfaces in interactive systems. It is therefore necessary to design accessible user interfaces that ensure all people can access and operate these systems regardless of their visual, auditory, cognitive or motor abilities.

For this reason, the primary aim of this project is to demonstrate the viability and application of accessibility techniques in the design and development of interactive systems, ensuring that persons with disabilities (PWDs) can access and operate them. Given the nature of accessibility requirements, the project will take a multidisciplinary approach that draws on shared scientific knowledge from fields such as Artificial Intelligence (AI) and Human-Computer Interaction (HCI). This approach will define a user-centred strategy that ensures the participation of PWDs and integrates these techniques so as to provide systematic support for compliance with accessibility requirements, following a regulatory framework, in a case study of video conferencing systems in the education and healthcare fields.

Video conferencing applications have become increasingly widespread in practically all aspects of our daily lives (home, education, work, healthcare, etc.). These platforms have proliferated and are here to stay, with millions of new users as a result of the global COVID-19 pandemic. In such a diverse society, steps must be taken to prevent digital exclusion and to ensure that everyone can access and operate these applications.
Users with sensory impairments, such as blind people, individuals with low vision, and deaf or hard-of-hearing individuals, need video conferencing applications that include the necessary accessibility requirements: services providing audio descriptions of videos and images, interfaces adapted to sensory characteristics, subtitling services, voice-operated applications and screen readers. Furthermore, there are cognitive accessibility barriers that affect both people with intellectual disabilities and the elderly, all of whom require intuitive user interfaces, text simplification services and automatic text summarization, among other tools, to assist in the understanding of texts, as set out in this project.
As specific contributions, design standards will be defined following HCI methods for the design of accessible and adaptable user interfaces. Additionally, affective computing and computational semiotic techniques will be explored with regard to their application in the interactive elements of user interfaces. Moreover, telematic techniques for generating subtitles will be researched, as well as the use of AI techniques to produce high-quality transcriptions through the post-processing of subtitles. In the area of AI and Natural Language Processing (NLP), approaches will be explored for lexical simplification and for the creation of corpora of Easy-to-Read and Plain Language texts. Furthermore, deep learning techniques for text summarization will also be researched.
A user-centred design approach will be followed, taking into account the participation of people with disabilities. The research group is a member of the Centro Español del Subtitulado y la Audiodescripción (CESYA). In addition, the project has the support of EPOs such as MeaningCloud, Pearson Educación, INDRA, Fundación ONCE, Real Patronato para la Discapacidad, Plena Inclusión Madrid, FIAPAS, Colegio Tres Olivos and FUNKA.