The ES Link commercialises not only the research which it has itself funded, but all related human-computer interaction (HCI) research at the University of Edinburgh, involving over 100 staff. Please contact the Link commercial director for further details.
A- Link-funded Projects
B- Proof of Concept
LINK-FUNDED PROJECTS / (in alphabetical order)
The following projects have been funded by the Link. Most projects last two to three years, and are jointly carried out in both Edinburgh and Stanford.
1- Alignment and Affect with Computers
2- Automating Call Centres
3- Collaborating Using Diagrams
4- Critical Agent Dialogue: Improving Software Agents
5- Improving Biomedical Discovery
6- Improving Speech Recognition
7- Paraphrase Analysis for Improved Generation
8- Prosody for Speech Synthesis
9- ROSIE
10- Reactive Planning for Dialogue Systems
11- SEER
12- Sounds of Discourse
13- Natural Clarification Strategies
Alignment and Affect with Computers
The project examines alignment between computers and users. How can computer-user alignment enhance the perceived quality and satisfaction of an interaction? The project aims to identify which aspects of alignment have the greatest impact on user satisfaction and performance.
Automating Call Centres
Automation of contact centre interactions is a realistic aim only if dialogue management technology is employed. Advanced dialogue technology will allow more flexible and natural interactions than current ‘form-based’ speech interfaces do. This project aims to consider business processing models when developing dialogue systems that can be used for the partial automation of contact centres.
Collaborating Using Diagrams
The aim of this project is to find out how people collaborate to perform a task that requires them both to talk and draw. How do they use these media to communicate, to focus their activity, to develop common goals and strategies?
Critical Agent Dialogue: Improving Software Agents
Animated characters (sometimes called avatars or software agents) have become commonplace in today’s hi-tech society. The CrAg project will investigate how people’s personalities affect the way they behave when talking to others, and how this information can be used to create more believable and attractive characters for use in internet applications.
Improving Speech Recognition
Despite progress in automatic speech recognition (ASR) technology, many applications beyond voice-trained dictation systems have remained elusive. This project describes speech in terms of articulatory acoustics, which characterise sounds by how and where in the mouth they are produced, in order to build a more accurate recognition system.
Paraphrase Analysis for Improved Generation
Even once they know what to say, people and machines have to choose the right way to say it – otherwise, at best they’ll sound awkward and at worst they won’t be understood. By enabling computers to say things in a more natural way, this project will have an immediate impact on developers of synthesised voices and natural language dialogue systems.
Prosody for Speech Synthesis
This project proposes to enhance components of the Festival speech synthesis system, developed by the University of Edinburgh and already used as a basis for several of the world’s leading synthesised speech developers, to improve the prosody (or intonation) of the voice.
ROSIE
The ROSIE project is using Active Learning (AL) techniques to reduce the cost of creating annotated text, applying statistical models to new corpora that are only partially annotated. This allows similar information to be extracted as if the new text had been fully annotated, saving a large amount of time, effort and expense. The work is relevant to information extraction, business intelligence and knowledge management.
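The Active Learning idea described above can be illustrated as pool-based uncertainty sampling: the model scores the unannotated text itself, and only the examples it is least confident about are sent for human annotation. A minimal sketch under that assumption – the classifier, scoring rule and data below are hypothetical illustrations, not ROSIE’s actual models:

```python
def uncertainty(prob):
    """How far a probability estimate is from a confident 0 or 1 prediction."""
    return 1.0 - max(prob, 1.0 - prob)

def select_for_annotation(pool, classifier, budget):
    """Pick the `budget` unlabelled examples the classifier is least sure about."""
    ranked = sorted(pool, key=lambda ex: uncertainty(classifier(ex)), reverse=True)
    return ranked[:budget]

# Toy classifier: estimated probability that a token is part of an entity name.
toy_model = lambda token: 0.9 if token.istitle() else (0.5 if token.isupper() else 0.1)

pool = ["Edinburgh", "the", "ROSIE", "project", "Stanford"]
print(select_for_annotation(pool, toy_model, 2))  # → ['ROSIE', 'Edinburgh']
```

Only the selected examples would then be hand-annotated, and the retrained model is applied to the rest of the corpus – which is where the saving in time and expense comes from.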
Reactive Planning for Dialogue Systems
Having a meaningful conversation with a computer remains difficult, primarily because the technology that tries to understand meaning – called a dialogue manager – often imposes a rigid set of expectations. This project is adapting reactive planning technology, originally developed for robotics, for dialogue management. The system will be trialled by using it to run an autonomous helicopter.
SEER
The aim of SEER is to explore techniques for the rapid development of automated entity recognition in new domains. Researchers have already created software that successfully identifies entities in relatively structured data, such as the text of a financial newspaper. SEER is attempting to achieve similarly successful results in less structured domains, such as resumes or archaeological reports, which pose specific problems when trying to teach computers to learn.
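The kind of entity recognition that already works well on relatively structured text can be illustrated with simple surface patterns. The sketch below is a hypothetical illustration of that baseline – the labels, regular expressions and example sentence are invented here, and SEER’s own machine-learning approach is far more sophisticated:

```python
import re

# Illustrative surface patterns for a financial-news-like domain.
PATTERNS = {
    "MONEY": re.compile(r"£\d[\d,]*(?:\.\d+)?"),
    "DATE": re.compile(r"\b\d{1,2} (?:January|February|March|April|May|June|"
                       r"July|August|September|October|November|December) \d{4}\b"),
    "ORG": re.compile(r"\b(?:[A-Z][a-z]+ )+(?:plc|Ltd|University)\b"),
}

def find_entities(text):
    """Return (label, matched_string) pairs for every pattern hit in the text."""
    hits = []
    for label, pattern in PATTERNS.items():
        hits.extend((label, m.group()) for m in pattern.finditer(text))
    return hits

print(find_entities("Acme plc reported profits of £1,200,000 on 3 March 2004."))
```

Hand-written patterns like these break down on less structured documents such as resumes or archaeological reports, which is precisely the gap SEER addresses.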
Sounds of Discourse
The Sounds of Discourse project is using perceptual analysis to examine the way people emphasize certain words to create ‘added meaning’, primarily with a view to creating more realistic computer voices for robust dialogue systems between computers and humans.
Natural Clarification Strategies
When people engage in dialogue, things often get misheard or a point is missed. When this happens, we need to clarify things to make sure we catch the correct meaning, usually by asking a question. This project looks at a variety of techniques to help machines ask clarifying questions when they mishear or fail to understand.
PROOF OF CONCEPT / (in alphabetical order)
These projects have received up to £200,000 in additional funding from Scottish Enterprise to further investigate potentially commercial inventions. Additional information can be found at: www.scottish-enterprise.com/proofofconcept
1- Festival UniLex Lexicon
2- Personalised Electronic Museum Curator
Personalised Electronic Museum Curator
The M-PIRO (Multilingual Personalised Information Objects) project built a museum information system in three languages (English, Greek and Italian) that tailors exhibit descriptions to three levels of user expertise. It provides novices with the fundamentals while offering experts a more complex explanation of each piece, and it remembers not to explain things twice.