A multi-modal device for application in microsleep detection

Bibliographic Details
Main Author: Knopp, Simon James
Language: en
Published: University of Canterbury. Electrical and Computer Engineering 2015
Online Access:http://hdl.handle.net/10092/10408
Summary: Microsleeps and other lapses of responsiveness can have severe, or even fatal, consequences for people who must maintain high levels of attention on monotonous tasks for long periods of time, e.g., commercial vehicle drivers, pilots, and air-traffic controllers. This thesis describes a head-mounted system, a first prototype towards a system that can detect (and possibly predict) these lapses in real time. The system consists of a wearable device which captures multiple physiological signals from the wearer and an extensible software framework for implementing signal processing algorithms. Proof-of-concept algorithms are implemented and used to demonstrate that the system can detect simulated microsleeps in real time.

The device has three sensing modalities, giving a better estimate of the user's cognitive state than any one modality alone could provide. Firstly, it has 16 channels of EEG (8 currently in use) captured by 24-bit ADCs sampling at 250 Hz; the EEG is acquired by custom-built dry electrodes consisting of spring-loaded, gold-plated pins. Secondly, it has a miniature video camera mounted below one eye, providing 320 × 240 px greyscale video of the eye at 60 fps; the camera module includes infrared illumination so that it can operate in the dark. Thirdly, it has a six-axis IMU to measure the orientation and movement of the head. These sensors are connected to a Gumstix computer-on-module which transmits the captured data to a remote computer via Wi-Fi. The device has a battery life of about 7.4 h.

In addition to this hardware, software to receive and analyse data from the head-mounted device was developed. The software is built around a signal processing pipeline designed to encapsulate a wide variety of signal processing algorithms: feature extractors calculate salient properties of the input data and a classifier fuses these features to determine the user's cognitive state.
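The extractor-plus-classifier structure of the pipeline can be sketched as follows. This is a minimal illustration only; the class names, interfaces, feature definitions, and fusion weights are all illustrative assumptions, not the thesis's actual plug-in API.

```python
# Sketch of a feature-extractor/classifier pipeline: each extractor reduces
# one input signal to a scalar feature, and a classifier fuses the features
# into a cognitive-state decision. All names and thresholds are hypothetical.

class EegAmplitudeExtractor:
    """Toy EEG feature: mean absolute amplitude over a window of samples."""
    def extract(self, eeg_window):
        return sum(abs(s) for s in eeg_window) / len(eeg_window)

class EyeClosureExtractor:
    """Toy eye-video feature: fraction of frames in which the eye is closed."""
    def extract(self, eye_closed_frames):
        return sum(1 for closed in eye_closed_frames if closed) / len(eye_closed_frames)

class ThresholdClassifier:
    """Toy classifier: fuses the features with fixed weights and thresholds
    the combined score into a binary lapse/alert decision."""
    def classify(self, features):
        score = 0.5 * features["eeg"] + 0.5 * features["eye"]
        return "lapse" if score > 0.6 else "alert"

def run_pipeline(eeg_window, eye_closed_frames):
    """Run one pipeline step: extract features from each signal, then fuse."""
    features = {
        "eeg": EegAmplitudeExtractor().extract(eeg_window),
        "eye": EyeClosureExtractor().extract(eye_closed_frames),
    }
    return ThresholdClassifier().classify(features)
```

In the real system each of these roles would be a separately compiled or interpreted plug-in (C++ or Python), so that extractors and classifiers can be swapped without changing the pipeline itself.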
A plug-in system allows users to write their own signal processing algorithms and to experiment with different combinations of feature extractors and classifiers. Because of this flexible, modular design, the system could also be used for applications other than lapse detection: any application which monitors EEG, eye video, and head movement, e.g., augmented cognition or passive BCIs, can be implemented by writing appropriate signal processing plug-ins. The software also provides the ability to configure the device's hardware, to save data to disk, and to monitor the system in real time. Plug-ins can be implemented in C++ or Python.

A series of validation tests confirmed that the system operates as intended. Most of the measured parameters were within the expected ranges: EEG amplifier noise = 0.14 μV RMS input-referred, EEG pass band = DC to 47 Hz, camera focus = 2.4 lp/mm at 40 mm, and total latency < 100 ms. Some parameters were worse than expected but still sufficient for effective operation: EEG amplifier CMRR ≥ 82 dB, EEG cross-talk = −17.4 dB, and IMU sampling rate = 10 Hz. The contact impedance of the dry electrodes, measured to be several hundred kilohms, was too high to obtain clean EEG.

Three small-scale experiments tested the performance of the device in operation on people. The first two demonstrated that the pupil localization algorithm produces PERCLOS values close to those from a manually rated gold standard and is robust to changes in ambient light levels, iris colour, and the presence of glasses. The final experiment demonstrated that the system is capable of capturing all three physiological signals, transmitting them to the remote computer in real time, extracting features from each signal, and classifying simulated microsleeps from the extracted features.
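PERCLOS is conventionally defined as the proportion of time over a window during which the eyes are at least 80% closed. A minimal sketch of that computation, assuming the pupil localization stage yields a per-frame eyelid-closure fraction in [0, 1] (the function name and input format are assumptions for illustration):

```python
def perclos(closure_fractions, threshold=0.8):
    """Proportion of frames in which the eyelid is at least `threshold` closed.

    `closure_fractions` is a sequence of per-frame eyelid-closure values in
    [0, 1], e.g. derived from a pupil localization algorithm. The conventional
    PERCLOS threshold is 80% closure.
    """
    if not closure_fractions:
        raise ValueError("need at least one frame")
    closed = sum(1 for f in closure_fractions if f >= threshold)
    return closed / len(closure_fractions)
```

At the device's 60 fps frame rate, a one-minute PERCLOS window would span 3600 per-frame closure values.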
However, this test was successful only when using conventional wet EEG electrodes instead of the dry electrodes built into the device; it will be necessary to find replacement dry electrodes for the device to be useful. The device and associated software form a platform which other researchers can use to develop algorithms for lapse detection. This platform provides data capture hardware and abstracts away the low-level software details so that other researchers are free to focus solely on developing signal processing techniques. In this way, we hope to enable progress towards a practical real-time, real-world lapse detection system.