Machine Learning Classification of Facial Affect Recognition Deficits after Traumatic Brain Injury for Informing Rehabilitation Needs and Progress

Bibliographic Details
Main Author: Iffat Naz, Syeda
Other Authors: Christopher, Lauren
Language: en_US
Published: 2021
Subjects:
ToM
TBI
SVM
RF
Online Access: http://hdl.handle.net/1805/24774
Description
Summary: Indiana University-Purdue University Indianapolis (IUPUI). A common impairment after a traumatic brain injury (TBI) is a deficit in emotion recognition, such as inferring others' intentions. Some researchers have found these impairments in 39% of the TBI population. Much of the information needed to make inferences about emotions and mental states comes from visually presented, nonverbal cues (e.g., facial expressions or gestures). Theory of mind (ToM) deficits after TBI are partially explained by impaired visual attention to, and processing of, these important cues. This research found that patients with deficits in visual processing differ from healthy controls (HCs). Furthermore, we found that visual processing problems can be detected from eye-tracking data collected with industry-standard eye-tracking hardware and software. We predicted that the eye-tracking data of the overall population would correlate with performance on The Awareness of Social Inference Test (TASIT). The visual processing of impaired participants (those who answered at least one TASIT question incorrectly) differs significantly from that of unimpaired participants (those who answered all TASIT questions correctly). We divided the eye-tracking time series into 3-second blocks to detect which individual blocks are most salient to the TASIT score. Our preliminary results suggest that we can predict impairment across the whole population using eye-tracking data, improving the F1 score from 0.54 to 0.73. For this, we developed optimized support vector machine (SVM) and random forest (RF) classifiers.
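
The thesis code is not part of this record, but the pipeline described in the abstract (3-second blocking of the eye-tracking time series, binary impaired/unimpaired labels derived from TASIT answers, and tuned SVM and RF classifiers evaluated by F1 score) could look roughly like the sketch below. The synthetic data, gaze features, column names, and hyperparameter grids are illustrative assumptions, not the author's implementation.

```python
# Hypothetical sketch: segment eye-tracking time series into 3-second blocks,
# derive simple per-block gaze features, and classify impaired vs. unimpaired
# (labels assumed to come from TASIT answers) with tuned SVM and RF models.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

BLOCK_SECONDS = 3  # block length reported in the abstract

# Synthetic stand-in for eye-tracking recordings (assumed schema).
rng = np.random.default_rng(0)
n_subjects, hz, seconds = 40, 60, 30
gaze_df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), hz * seconds),
    "time_s": np.tile(np.arange(hz * seconds) / hz, n_subjects),
    "gaze_x": rng.normal(size=n_subjects * hz * seconds),
    "gaze_y": rng.normal(size=n_subjects * hz * seconds),
})
# 0/1 impairment label per subject, assumed to be derived from TASIT answers.
labels = pd.DataFrame({"subject": np.arange(n_subjects),
                       "impaired": rng.integers(0, 2, n_subjects)})

def block_features(gaze: pd.DataFrame) -> pd.DataFrame:
    """Aggregate per-block gaze statistics (illustrative feature set)."""
    gaze = gaze.copy()
    gaze["block"] = (gaze["time_s"] // BLOCK_SECONDS).astype(int)
    return gaze.groupby(["subject", "block"]).agg(
        mean_x=("gaze_x", "mean"), mean_y=("gaze_y", "mean"),
        std_x=("gaze_x", "std"), std_y=("gaze_y", "std"),
    ).reset_index()

features = block_features(gaze_df).merge(labels, on="subject")
X = features[["mean_x", "mean_y", "std_x", "std_y"]].values
y = features["impaired"].values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Grid-searched SVM and RF, both tuned for F1 as in the abstract.
models = {
    "SVM": GridSearchCV(make_pipeline(StandardScaler(), SVC()),
                        {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
                        scoring="f1"),
    "RF": GridSearchCV(RandomForestClassifier(random_state=0),
                       {"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
                       scoring="f1"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "F1:", f1_score(y_te, model.predict(X_te)))
```

With real data, the per-subject block features and the TASIT-derived labels would replace the synthetic frames above; the block length and scoring metric follow the abstract, while everything else is an assumption.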