Automatic, Qualitative Scoring of the Interlocking Pentagon Drawing Test (PDT) Based on U-Net and Mobile Sensor Data
We implemented a mobile phone application of the pentagon drawing test (PDT), called mPDT, with a novel, automatic, and qualitative scoring method based on U-Net (a convolutional network for biomedical image segmentation) coupled with mobile sensor data obtained with the mPDT.
| Main Authors: | Ingyu Park, Yun Joong Kim, Yeo Jin Kim, Unjoo Lee |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020-02-01 |
| Series: | Sensors |
| Subjects: | pentagon drawing test; automatic scoring; mobile sensor; deep learning; u-net; parkinson's disease |
| Online Access: | https://www.mdpi.com/1424-8220/20/5/1283 |
id
doaj-6ca6dd0d82624b3588b4a6f20190c207
record_format
Article
spelling
doaj-6ca6dd0d82624b3588b4a6f20190c207 (2020-11-25T01:19:53Z)
Language: eng
Publisher: MDPI AG
Journal: Sensors, ISSN 1424-8220
Published: 2020-02-01, vol. 20, no. 5, article 1283
DOI: 10.3390/s20051283 (article ID s20051283)
Title: Automatic, Qualitative Scoring of the Interlocking Pentagon Drawing Test (PDT) Based on U-Net and Mobile Sensor Data
Authors and affiliations:
Ingyu Park, Department of Electronic Engineering, Hallym University, Chuncheon 24252, Korea
Yun Joong Kim, Department of Neurology, Hallym University Sacred Heart Hospital, Hallym University College of Medicine, Hallym University, Anyang 14068, Korea
Yeo Jin Kim, Department of Neurology, Chuncheon Sacred Heart Hospital, Hallym University College of Medicine, Chuncheon 24252, Korea
Unjoo Lee, Department of Electronic Engineering, Hallym University, Chuncheon 24252, Korea
Abstract: see the description field below.
URL: https://www.mdpi.com/1424-8220/20/5/1283
Topics: pentagon drawing test; automatic scoring; mobile sensor; deep learning; u-net; parkinson's disease
collection
DOAJ
language
English
format
Article
sources
DOAJ
author
Ingyu Park; Yun Joong Kim; Yeo Jin Kim; Unjoo Lee
author_sort
Ingyu Park
title
Automatic, Qualitative Scoring of the Interlocking Pentagon Drawing Test (PDT) Based on U-Net and Mobile Sensor Data
publisher
MDPI AG
series
Sensors
issn
1424-8220
publishDate
2020-02-01
description
We implemented a mobile phone application of the pentagon drawing test (PDT), called mPDT, with a novel, automatic, and qualitative scoring method based on U-Net (a convolutional network for biomedical image segmentation) coupled with mobile sensor data obtained with the mPDT. For the scoring protocol, the U-Net was trained with 199 PDT hand-drawn images of 512 × 512 resolution obtained via the mPDT to generate a trained model, Deep5, for segmenting a drawn right or left pentagon. The U-Net was also trained with 199 images of 512 × 512 resolution to obtain the trained model, DeepLock, for segmenting an interlocking figure. In both cases, training epochs were iterated until the accuracy exceeded 98% and saturated. The mobile sensor data primarily consisted of the x and y coordinates, timestamps, and touch events of all samples, acquired with a 20 ms sampling period; velocities were then calculated from these primary sensor data. With Deep5, DeepLock, and the sensor data, four parameters were extracted: the number of angles (0–4 points), the distance/intersection between the two drawn figures (0–4 points), the closure/opening of the drawn figure contours (0–2 points), and detected tremor (0–1 point). Together, the parameters give a total score of 11 points. The performance evaluation of the mPDT used 230 images from subjects and their associated sensor data. The performance test indicated a sensitivity, specificity, accuracy, and precision, respectively, of 97.53%, 92.62%, 94.35%, and 87.78% for the number-of-angles parameter; 93.10%, 97.90%, 96.09%, and 96.43% for the distance/intersection parameter; 94.03%, 90.63%, 92.61%, and 93.33% for the closure/opening parameter; and 100.00%, 100.00%, 100.00%, and 100.00% for the detected-tremor parameter. These results suggest that the mPDT is robust in differentiating dementia disease subtypes and can contribute to clinical practice and field studies.
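The description above mentions a 20 ms-sampled touch stream (x, y coordinates, timestamps, touch events) from which velocities are computed, four sub-scores that sum to an 11-point total, and per-parameter sensitivity, specificity, accuracy, and precision. The sketch below is not taken from the paper; all names (Sample, stroke_velocities, pdt_total_score, binary_metrics) are hypothetical, and it only illustrates one plausible reading of those pieces: velocity as Euclidean displacement over elapsed time between consecutive touch samples, the sub-scores added to a maximum of 11, and the four reported metrics derived from confusion-matrix counts.

```python
# Minimal sketch (not the authors' implementation) of the quantities named in
# the abstract. All identifiers here are hypothetical.

from dataclasses import dataclass
from math import hypot
from typing import List

@dataclass
class Sample:
    x: float          # touch x coordinate
    y: float          # touch y coordinate
    t_ms: float       # timestamp in milliseconds (samples ~20 ms apart)
    touching: bool    # touch-event flag (pen/finger in contact)

def stroke_velocities(samples: List[Sample]) -> List[float]:
    """Speed between consecutive touching samples, assuming velocity is
    Euclidean displacement divided by elapsed time."""
    v = []
    for a, b in zip(samples, samples[1:]):
        if a.touching and b.touching and b.t_ms > a.t_ms:
            v.append(hypot(b.x - a.x, b.y - a.y) / (b.t_ms - a.t_ms))
    return v

def pdt_total_score(angles: int, intersection: int, closure: int, tremor: int) -> int:
    """Sum of the four sub-scores; point ranges taken from the abstract
    (angles 0-4, distance/intersection 0-4, closure/opening 0-2, tremor 0-1)."""
    assert 0 <= angles <= 4 and 0 <= intersection <= 4
    assert 0 <= closure <= 2 and 0 <= tremor <= 1
    return angles + intersection + closure + tremor  # maximum 11

def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, accuracy, and precision as reported per parameter."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
    }
```

The paper's actual parameter-extraction and tremor-detection rules, and the confusion-matrix counts behind the reported percentages, are not given in this record, so the functions above should be read only as definitions of the reported quantities, not as the published method.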
topic
pentagon drawing test; automatic scoring; mobile sensor; deep learning; u-net; parkinson’s disease
url
https://www.mdpi.com/1424-8220/20/5/1283