
20 October 2020

Unlocking cognitive research during lockdown: post-webinar Q and A

The volume of discussion and questions during our recent webinar, Remote Cognitive Testing and Lessons Learned During Lockdown, was great to see. Unfortunately, we were unable to answer all of the questions in the time we had, so Dr Caroline Skirrow has put together a Q and A addressing some of the key themes raised.

Thanks to all of you who joined in (or later caught up on) our webinar. In this webinar, Professor Victoria Leong and I discussed bringing cognitive testing home in the time of COVID-19, and the comparability of remote unsupervised or remote guided assessments versus traditional lab-based in-person assessments. For anyone who would like to see the recording again, share it with colleagues, or simply catch up, the link is here:

Watch webinar

The volume of discussion and questions was great to see. Unfortunately, we were unable to answer all of the questions that arose in our limited time. Instead, I have put together this Q and A blog to cover the key questions raised during the webinar, organised by theme. If you have any further questions at all, please reach out to me (caroline.skirrow@camcog.com) or the Cambridge Cognition support team.

If you are interested in reading the original paper that I discussed during the webinar, it is freely available to download here.

 

Quality control

Q – Is the software able to lock the browser so that participants are unable to open another window or tab while they are undergoing the assessment?

A – In CANTAB tests, participants are asked to complete all assessments in full-screen mode. Where participants minimise the screen or tab away from the test screen, we automatically log a distraction event, which is recorded alongside the performance data. This allows for some quality control of the data as it is collected, even without overt supervision. In our study of comparability between in-person and unsupervised web-based assessment, this additional information allowed us to examine the effect of distraction and control for it in our analysis; a sketch of this kind of check follows below.
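As a concrete illustration of how such logged distraction events can feed into quality control downstream, here is a minimal Python sketch of the kind of check an analyst might run on exported data. The file and column names (cantab_web_visits.csv, distraction_events, rt_median_ms) are illustrative assumptions, not the actual CANTAB export schema.

```python
import pandas as pd

# Hypothetical export: one row per web-based visit, including a count of the
# distraction events logged when the participant tabbed away mid-test.
# File and column names are illustrative, not the real CANTAB schema.
visits = pd.read_csv("cantab_web_visits.csv")

# Flag visits with at least one logged distraction event.
visits["distracted"] = visits["distraction_events"] > 0

# Option 1: sensitivity analysis that simply excludes distracted visits.
clean_visits = visits[~visits["distracted"]]

# Option 2: keep all visits and adjust for distraction in the statistical
# model instead, mirroring the approach described in the answer above.
print(visits.groupby("distracted")["rt_median_ms"].describe())
```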

Q – Couldn’t we assume, as a norm, that people performing the tests have the camera and sound turned on during testing? Then any simple application could monitor: 1. sound level and quality; 2. the person’s facial expression, giving us additional information about their mental state; 3. other distractors; and add these as metadata to the performance file, perhaps even grouping them according to defined rules.

A – There is a range of different levels of control and observation that would be possible for remote assessments. What we presented in the webinar were two extremes of a continuum: one in which testing was wholly unsupervised and unmonitored, and another, more labour-intensive, approach in which tests are guided and supervised. There are likely to be additional automated quality controls (such as sound monitoring or eye tracking) that could provide further data on whether a participant is on task and so support quality control. Currently, these additional metrics are not incorporated into our usual web-based CANTAB tests.

We have, however, recently developed Neurovocalix, a voice-based cognitive assessment platform which delivers automated voice-based assessment and scoring. With voice assessments we routinely examine background sound in our analyses, and we can derive certain vocal markers of mental state, which we are continuing to develop. See our blog here.

 

Computer set-up and internet speed

Q – Could the latency-associated errors be an artefact of differences in network performance?

A – The CANTAB platform is designed to work in the same way regardless of the internet connection. Web-based CANTAB tests are resistant to low bandwidth because they preload or cache test data, which is a real differentiator from other commonly used tests. Given that internet speed does not affect reaction time measures on web-based CANTAB tests, it is likely to be hardware latencies (random error) that are driving the variance in our study, as the simulation sketched below illustrates.
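To show why hardware latency behaves as random error, the small Python simulation below adds trial-to-trial latency jitter to an identical set of "true" reaction times. The latency ranges are invented for illustration, not measured CANTAB values; the point is simply that heterogeneous home hardware inflates the variance of recorded reaction times even when underlying cognitive performance is unchanged.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_trials = 10_000

# Identical "true" reaction times (ms) in both settings.
true_rt = rng.normal(loc=450, scale=60, size=n_trials)

# Illustrative hardware/display latency jitter: a calibrated tablet set-up
# versus heterogeneous home laptops and desktops (values are invented).
lab_latency = rng.uniform(10, 30, n_trials)
home_latency = rng.uniform(20, 120, n_trials)

lab_rt = true_rt + lab_latency
home_rt = true_rt + home_latency

print(f"lab:  mean = {lab_rt.mean():.0f} ms, sd = {lab_rt.std():.0f} ms")
print(f"home: mean = {home_rt.mean():.0f} ms, sd = {home_rt.std():.0f} ms")
# The home condition shows a small mean shift and inflated variance even
# though the underlying cognitive performance is identical.
```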

Q – Some platforms provide scripts to capture information on remote participants' computer performance. Can web-based CANTAB capture similar computer performance data?

A – For every web-based visit, the platform on which the test is running (e.g. Windows) and the version of the browser (e.g. Chrome) are collected as standard, and this information can be made available if required. The sketch below illustrates the kind of fields involved.
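For readers curious about what such device metadata looks like, below is a rough, illustrative Python sketch of extracting operating system and browser fields from a User-Agent string, the HTTP header that web platforms typically log. This is not how CANTAB itself implements the check; real User-Agent parsing has many edge cases and is best left to a dedicated library.

```python
import re

def parse_user_agent(ua: str) -> dict:
    """Very rough, illustrative User-Agent parser (not CANTAB's own code)."""
    os_match = re.search(r"\((Windows[^;)]*|Macintosh|Linux|iPad|iPhone)", ua)
    browser_match = re.search(r"(Chrome|Firefox|Edg|Safari)/([\d.]+)", ua)
    return {
        "os": os_match.group(1) if os_match else "unknown",
        "browser": browser_match.group(1) if browser_match else "unknown",
        "version": browser_match.group(2) if browser_match else "unknown",
    }

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36")
print(parse_user_agent(ua))
# {'os': 'Windows NT 10.0', 'browser': 'Chrome', 'version': '86.0.4240.75'}
```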

Q – Why did you choose to use the iPad in the lab and computer at home?

A – In the laboratory, CANTAB is typically set up for use on iPads, and this is the standard equipment that we recommend for large in-person research studies and clinical trials. However, laptop and desktop computers currently make up a larger share of home computing technology than tablets. The study was therefore set up to capture performance on the most typical equipment in its natural environment. Of course, this comes with differences in technology and response modality (mouse click, track pad or touchscreen press), which can have knock-on effects on response speed.

 

Practice effects

Q – How did you consider practice effects?

A – In our comparative study of in-person versus unsupervised web-based assessments, we used mixed effects models to examine equivalence between test settings (home vs. the lab). With these mixed effects models we were able to estimate and control for the effects of practice on our tests whilst simultaneously examining the effect of test setting. In keeping with other test-retest studies, we did see practice effects on some of the test outcome measures, but because of the type of analysis we completed and the within-subjects crossover design that we used, we were able to model these effects separately. A sketch of this general type of model is shown below.
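For readers who want a concrete starting point, the snippet below sketches a model of this general type using the statsmodels library in Python. It is an illustration under assumed data (long format, one row per participant per visit, with hypothetical file and column names), not our actual analysis code, which is described in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format data: one row per participant per visit, with
# illustrative column names. The crossover design means 'setting' (lab vs
# web) and 'visit' (first vs second, i.e. practice) can be separated.
df = pd.read_csv("crossover_scores.csv")

model = smf.mixedlm(
    "score ~ setting + visit",  # fixed effects: test setting and practice
    data=df,
    groups=df["subject"],       # random intercept for each participant
)
result = model.fit()
print(result.summary())
```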

 

Use case in other populations

Q – How well does remote testing work with elderly populations and people with intellectual disability, memory problems or dementia?

A – Our experience is that older participants engage well with remote online assessments, and that the data produced is informative and sensitive to impairment and clinical status.

For example, recent data from the MOPEAD (Models of Patient Engagement in Alzheimer’s Disease) study, presented at the Alzheimer’s Association International Conference (AAIC) in 2019, looked at the ability of patients across a range of countries to complete screening assessments on CANTAB (the Spatial Working Memory and Paired Associates Learning tests). The study showed that older adults were able and willing to engage in self-guided web-based assessments, and that test performance was as expected for the age range recruited and assessed. We have more detail and a copy of the scientific poster in this earlier blog post.

So far, research suggests that test-retest reliability for CANTAB delivered via remote assessment is favourable, even in older and cognitively impaired participants. Maljkovic and colleagues (2019) reported results from a study comparing healthy control participants with people who were cognitively impaired (Alzheimer’s dementia or mild cognitive impairment). In this study, participants completed cognitive tests remotely at home on an iPad. The authors showed moderate or higher reliability for 8 out of 11 CANTAB outcome measures, and found that 9 out of 11 measures discriminated between healthy controls and cognitively impaired participants. For more detail, have a look at the scientific poster presented at the Alzheimer’s Association International Conference (AAIC) here.

Q – My main concern with web-based assessment is the lack of in-person qualitative observation of patient performance and symptom manifestation during test-taking. It may work for healthy controls, but the same cannot be said of patient populations. What are your thoughts?

A – The results of the study we presented in the webinar do not provide any information on the comparability of test results across lab- and web-based settings in clinical populations, and we do need to be careful about overgeneralising the results of this particular study. However, we can be confident that remote web-based assessment using CANTAB is possible in clinical populations, as was reported for participants with cognitive impairment (Alzheimer’s dementia or mild cognitive impairment) by Maljkovic and colleagues last year (click here for a link to the scientific poster). Interestingly, in this study the participants with cognitive impairment were required to have a study partner who monitored their use and charging of the devices. This suggests that, at least in some cases, home-based support networks can help to support remote assessments. It is possible that with the right study design, and with participant and family/carer involvement, these resources could be leveraged further to provide qualitative observations and information on symptom manifestation during test-taking.

Q – I was wondering how you deal with older adults with poor hearing or vision when they perform these tests?

A – All CANTAB tests contain in-built voiceover instructions, and during the tasks participants are asked to respond to visual stimuli. It is therefore important that participants wear glasses or hearing aids as needed when completing their assessments. For participants with more serious hearing and visual impairments, CANTAB may not be suitable, particularly in an unsupported web-based test arrangement.

Q – I'd like to know which validated remote methods are available for cancer-related cognitive decline.

A – CANTAB has previously been used in a range of oncology studies; have a look here at our previous blogs on cognitive safety in cancer trials and on biological mechanisms in chemotherapy-associated cognitive impairments. Although we don’t have specific examples of CANTAB tests being used in remote oncology studies, there is good reason to expect that, as in other studies using remote CANTAB assessments to examine cognitive function in patients with cognitive impairment, remote assessment will deliver sensitive and useful research outcomes. However, where reaction times are required as primary outcome measures, careful consideration of study design and equipment may be needed.

 

Comparison with other computerised tests

Q – Have you compared the comparability of web-based CANTAB with that of other computerised cognitive assessment platforms?

A – When looking at correlations between web-based and lab-based computerised cognitive testing, CANTAB performs very similarly to other tests on the market. These include batteries such as the NutriCog battery (Assmann et al., 2016), the Amsterdam Cognition Scan (Feenstra, Murre, Vermeulen, Kieffer, & Schagen, 2017), and the Cogstate test battery (Cromer et al., 2015).

Whilst at first glance the Cogstate test battery results appear to show much higher ICC coefficients, this is in fact the product of a much less conservative ICC form, which calculates the ICC on the premise that scores are obtained from different raters. In our current study we assume that the CANTAB rater (the computer program) remains the same at home or in the lab; we see this consistency as one of the strongest benefits of automated testing and test scoring. However, it produces a more conservative ICC coefficient. For example, for our two PAL outcome measures (Total Errors Adjusted and First Attempt Memory Score) we report ICCs of 0.60 and 0.51 in our current study, on the basis of a single-rating, absolute-agreement, two-way random-effects model (ICC(2,1)). Had we used an average-measures, absolute-agreement, two-way random-effects model, as in the Cogstate study (Cromer et al., 2015), we would have reported ICCs of 0.75 and 0.67, respectively. The sketch below shows how these two forms are computed from the same data.
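To make the difference between the two ICC forms concrete, here is a minimal NumPy sketch computing both absolute-agreement ICCs from the standard Shrout and Fleiss two-way ANOVA decomposition. This is an illustrative reimplementation with made-up data, not our analysis code; note that the average-measures form, ICC(2,k), is always at least as large as the single-measures form whenever the ICC is positive.

```python
import numpy as np

def icc_absolute_agreement(scores: np.ndarray) -> tuple[float, float]:
    """Two-way random-effects, absolute-agreement ICCs (Shrout & Fleiss).

    scores: n_subjects x k_sessions matrix.
    Returns (ICC(2,1) single-measure, ICC(2,k) average-measure)."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)  # per-subject means
    col_means = scores.mean(axis=0)  # per-session means

    # Mean squares from the two-way ANOVA decomposition.
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_error = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    icc_single = (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)
    icc_average = (ms_rows - ms_error) / (ms_rows + (ms_cols - ms_error) / n)
    return icc_single, icc_average

# Made-up scores for 5 subjects tested in two sessions (e.g. lab then home).
scores = np.array([[12, 10], [18, 17], [9, 11], [15, 14], [20, 16]], float)
single, average = icc_absolute_agreement(scores)
print(f"ICC(2,1) = {single:.2f}, ICC(2,k) = {average:.2f}")
```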

It is also worth noting that studies of these other cognitive batteries have not examined agreement and equivalence in depth in the same way as we report in our paper. This means that whilst performance is correlated across these settings, this could have arisen in the context of differences in scale and variance as we discussed in our paper and during the webinar.

 

CANTAB around the world

Q – Can tests be translated into vernacular languages?

A – The majority of CANTAB tasks are language- and culture-independent, meaning that a subject’s cultural background should not influence performance on the tasks. The only exception is the Verbal Recognition Memory (VRM) task, which contains language-dependent stimuli.

In CANTAB Connect, the instructions are administered via an automated voiceover, which is currently available in over 45 languages. You can contact the Support Team for further details of which languages are currently supported for each CANTAB test. Additional voiceover translations are available upon request; you can get in touch for a quote.

Q – Are the available norms applicable to an Indian setting?

A – We have web-based assessment norms for populations residing primarily in the United Kingdom, the United States of America and continental Europe. We are currently completing a large study collecting normative data from India using in-person, lab-based testing. The interim results can be found here.

Q – Are there ethical considerations when having to test in different countries?

A – Whilst remote testing does make it easier to collect data across different sites and in different countries, study design, data collection and data management all need to be underpinned by the relevant legislation. Whilst we do not typically provide support with ethical approvals for human and animal studies, we can support when it comes to data management. National and international statutes, laws and guidelines relevant to our products, services and client requirements are followed, and our products, services and the personnel supporting the business meet the requirements of a range of international standards. You can contact the Support Team for further details and information.

 

Verbal assessment

Q – I am keen to know whether any tests of verbal episodic memory can be delivered online.

A – Cambridge Cognition has recently developed a voice-based cognitive assessment platform for this very purpose. Our platform, Neurovocalix, delivers automated voice-based assessment and scoring with high accuracy, including the Verbal Paired Associates test, a test of verbal episodic memory. Further information can be found in this blog.

 

References

Assmann, K. E., Bailet, M., Lecoffre, A. C., Galan, P., Hercberg, S., Amieva, H., & Kesse-Guyot, E. (2016). Comparison between a self-administered and supervised version of a web-based cognitive test battery: Results from the NutriNet-Santé cohort study. Journal of Medical Internet Research, 18(4), 1–13. https://doi.org/10.2196/jmir.4862

Cromer, J. A., Harel, B. T., Yu, K., Valadka, J. S., Brunwin, J. W., Crawford, C. D., … Maruff, P. (2015). Comparison of Cognitive Performance on the Cogstate Brief Battery When Taken In-Clinic, In-Group, and Unsupervised. Clinical Neuropsychologist, 29(4), 542–558. https://doi.org/10.1080/13854046.2015.1054437

Feenstra, H. E. M., Murre, J. M. J., Vermeulen, I. E., Kieffer, J. M., & Schagen, S. B. (2017). Reliability and validity of a self-administered tool for online neuropsychological testing: The Amsterdam Cognition Scan. Journal of Clinical and Experimental Neuropsychology. Advance online publication. https://doi.org/10.1080/13803395.2017.1339017

Tags : web-based testing | cantab | cognitive testing | cognition


Dr Caroline Skirrow - Senior Scientist