4th Workshop on Speech and Language Processing for Assistive Technologies

ACL Special Interest Group on Speech and Language Processing for Assistive Technologies


Program

This is an overview of the program of SLPAT 2013. More details can be found on the accepted papers page.

Start time  Wednesday 21 August                 Thursday 22 August
09:00       Registration + Coffee               Special topic session (4 papers)
09:45       Opening
10:00       Invited talk by Mark Hawley
            (see below)
11:00       Coffee                              Coffee
11:30       Regular session (2 papers)          Regular session (2 papers)
12:30       Lunch                               Lunch
14:00       User panel (see below)              Regular session (2 papers)
15:00       Coffee                              Coffee
15:30       Demo/poster session (7 papers)      Closing, followed by
                                                Smart home tour (see below)
16:30       Regular session (2 papers)
17:30       Business meeting SIG-SLPAT
19:30       Gala dinner together with the
            WASSS workshop (see below)

User Panel

Following the format of the three previous SLPAT workshops, we will have a panel discussion involving not only those researching and developing assistive technologies, but also consumers of these technologies. This "user panel" has been a highlight of all previous workshops. Participants will have the unique opportunity to watch and listen to users interacting with dedicated technology, and to put questions to the panel.

Smart-home Tour

This year we will also have a tour of the "smart home" of the Laboratoire d'Informatique de Grenoble. This smart home, called "DOMUS", is a tool for researchers working on smart spaces and ambient intelligence. DOMUS is a 40 sqm flat composed of classical rooms (e.g., office, bedroom, bathroom and kitchen with dining area) furnished with real furniture. The entire apartment is fitted with many sensors and actuators and is controlled by a home automation system. DOMUS is used to test the validity, acceptability, usefulness and usability of innovative systems in the philosophy of Living Labs.

Gala dinner

The workshop ends with a joint dinner, together with the co-located WASSS workshop. The dinner will take place at the restaurant Le Téléphérique, located at La Bastille, and we will go there by cable car. See the venue page for information on how to get to the dinner.


Invited Speaker: Professor Mark Hawley

Mark Hawley is Professor of Health Services Research at the University of Sheffield, where he leads the Rehabilitation and Assistive Technology Research Group. He is also Honorary Consultant Clinical Scientist at Barnsley Hospital, where he is Head of Medical Physics and Clinical Engineering. Over the last 20 years, he has worked as a clinician and researcher – providing, researching, developing and evaluating assistive technology, telehealth and telecare products and services for disabled people, older people and people with long-term conditions.

Mark is Director of the Centre for Assistive Technology and Connected Healthcare (CATCH) at the university. He leads a number of projects funded by the National Institute for Health Research and Technology Strategy Board, and leads the Assistive Technology theme of the Devices for Dignity Healthcare Technology Cooperative. He is a founder non-Executive Director of Medipex Ltd., the NHS Innovation Hub for Yorkshire and the Humber. In 2007, he was awarded the Honorary Fellowship of The Royal College of Speech and Language Therapists for his service to speech therapy research.

Title and Abstract

SLPAT in practice: lessons from translational research

The talk will distil experience and results from several projects, over more than a decade, which have researched and developed the application of speech recognition as an input modality for assistive technology (AT). Current interfaces to AT for people with severe physical disabilities, such as switch-scanning, can be prohibitively slow and tiring to use. Many people with severe physical disabilities also have some speech, though many also have poor control of speech articulators, leading to dysarthria. Nonetheless, recognition of dysarthric speech can give people more control options than using body movement alone. Speech can therefore be an attractive option for AT input.

Techniques that have been developed for optimising the recognition of dysarthric speech will be described, resulting in recognition rates of greater than 80% for people with even the most severe dysarthria. Speech recognition has been applied as a means of controlling the home (via an environmental control system) and, probably for the first time, as a means of controlling a communication aid. The development of the Voice Input Voice Output Communication Aid (VIVOCA) will be described and some early results of its evaluation presented.

The talk will discuss some of the lessons learnt from these projects, such as:

  • The need to work in interdisciplinary teams including speech technologists, speech and language therapists, health researchers and assistive technologists.
  • The value of user-centred design, involving users in defining their wants and needs and then working with them, in an iterative manner, to refine the AT such that it becomes usable and acceptable.
  • The gap that exists between the results that can be achieved in the lab and those achievable in people’s homes under real usage conditions – something that is not often covered in research papers.
  • The practical approaches that can be applied to optimising recognition for individuals. It is often possible to make significant improvements in recognition rates by altering the configuration of the AT set-up.

The talk will conclude by describing some of the future potential applications of speech technology that are being developed, or considered, for people with disabilities as well as for frail older people and people with long-term conditions.