18 August 2020
Accuracy of automated scoring of verbal paired associates in a remote data collection context
At the virtual AAIC 2020 conference, Dr Francesca Cormack presented her research on automatic speech recognition for verbal cognitive testing.
Automatic speech recognition (ASR) technology has improved vastly over the past 20 years, and particularly over the past five, reaching accuracy considered comparable to human performance on industry-standard benchmark corpora. However, when these ASR systems are used in unconstrained settings, accuracy is often notably worse.
To address this, Dr Francesca Cormack, Director of Research & Innovation, has focused her research on understanding the limits of using ASR to collect verbal cognitive data.
Methods
- Participants were tested remotely on their own devices via the NeuroVocalix web-based platform
- All participants were native English speakers with no self-reported neurological or cognitive impairment
- Participants self-enrolled and completed a cognitive battery (Word Pairs, Digit Span and Serial Subtraction), ePRO measures, and demographic reporting
- Word-pair responses were scored automatically online, in real time, using proprietary ASR-based technology
- Data encrypted and stored on secure servers for offline analysis
- 150 word-pair sessions were manually scored by two trained raters, who noted technical and participant issues that could affect ASR accuracy
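The word-pair scoring step above can be sketched as matching the ASR transcript of a participant's answer against the expected target word. This is an illustrative assumption about how such scoring might work, not the proprietary NeuroVocalix logic; the function name and the variant-matching rule are hypothetical:

```python
import re

def score_word_pair_response(transcript, target, accepted_variants=None):
    """Score one verbal paired-associates response as correct (1) or not (0).

    `transcript` is the ASR output for the participant's spoken answer;
    `target` is the expected second word of the pair. A hedged sketch:
    real systems may also handle homophones, confidence scores and timing.
    """
    # Normalise: lowercase, then keep only word-like tokens, since ASR
    # output may include fillers ("um") and punctuation.
    tokens = re.findall(r"[a-z']+", transcript.lower())
    accepted = {target.lower()}
    if accepted_variants:
        accepted |= {v.lower() for v in accepted_variants}
    return 1 if any(tok in accepted for tok in tokens) else 0

# Example: cue "river", target "bank"
print(score_word_pair_response("um, bank", "bank"))       # 1
print(score_word_pair_response("I think shore", "bank"))  # 0
```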
Key findings
- Advances in ASR technology make remote verbal testing feasible
- Excellent agreement with manual scoring supports the validity of this testing method
- Scoring accuracy not affected by age, education or device used
- No significant difference between US and UK born populations
- Background noise is a strong contributor to errors in ASR scoring – automated checks may be useful in future work
- These methods could be especially valuable in a remote trial context
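Agreement between automated and manual scoring, as reported above, is commonly summarised with a chance-corrected statistic such as Cohen's kappa. A minimal sketch with illustrative data (not the study's results); the score sequences here are invented for demonstration:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary score sequences: agreement beyond chance."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items where the two raters match.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from each rater's marginal rate of scoring 1.
    pa1, pb1 = sum(a) / n, sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

# Hypothetical item-level scores: 1 = correct, 0 = incorrect.
manual = [1, 1, 0, 1, 0, 1, 1, 0]
auto   = [1, 1, 0, 1, 0, 1, 0, 0]
print(round(cohens_kappa(manual, auto), 2))  # 0.75
```

By convention, kappa above roughly 0.8 is often described as "excellent" agreement, which is the kind of threshold a validity claim like the one above would rest on.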
Dr Francesca Cormack, Director of Research & Innovation, Cambridge Cognition