Summary: This study presents a method for predicting the location class of the room (e.g., kitchen or restroom) in which a user is located by discovering location-specific sensor data motifs in time series observed by the user's sensor devices, such as a smartwatch, without requiring labeled training data collected in the target environment. For example, body-worn accelerometers observe similar waveforms corresponding to knife-chopping actions in kitchens, and active sound probing yields similar acoustic features in bathrooms because of their water-resistant walls. This indicates that such location-specific sensor data motifs carry inherent information for location class prediction in almost every environment. This study proposes a novel method that automatically detects location-specific motifs from time series sensor data by calculating a score that represents the 'location specificity' of each motif. Previous studies on location class prediction assume that location-specific sensor data are always observed in a room, or rely on handcrafted rules and templates to detect such data, which makes them difficult to apply in many realistic environments. In contrast, our method, named IndoLabel, automatically discovers short sensor data motifs specific to a location class and automatically builds an environment-independent location classifier without requiring handcrafted rules or templates. The proposed method was evaluated in real house environments using leave-one-environment-out cross-validation and achieved state-of-the-art performance even though labeled training data from the target environment were unavailable.
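To make the motif-scoring idea concrete, below is a minimal sketch, not the authors' IndoLabel algorithm: it cuts sensor series from training environments into fixed-length windows, scores each candidate window by an assumed 'location specificity' measure (here, the class purity of its nearest training windows under Euclidean distance), keeps the top-scoring motifs, and trains a classifier on minimum-distance-to-motif features. The window width, step, neighborhood size, scoring rule, and use of scikit-learn's RandomForestClassifier are all illustrative assumptions.

```python
# Illustrative sketch only: the specificity score (neighborhood class purity)
# and feature design are assumptions, not the paper's exact definitions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sliding_windows(series, width, step):
    """Cut a 1-D sensor series into overlapping fixed-length windows."""
    return np.array([series[i:i + width]
                     for i in range(0, len(series) - width + 1, step)])

def location_specificity(candidate, windows, labels, k=10):
    """Assumed score: fraction of the k nearest windows that share one class."""
    d = np.linalg.norm(windows - candidate, axis=1)
    nearest = labels[np.argsort(d)[:k]]
    _, counts = np.unique(nearest, return_counts=True)
    return counts.max() / k

def motif_features(series, motifs, width, step):
    """Feature vector: minimum distance from the series to each motif."""
    w = sliding_windows(series, width, step)
    return np.array([np.min(np.linalg.norm(w - m, axis=1)) for m in motifs])

# --- toy usage with synthetic "training environment" data ---
rng = np.random.default_rng(0)
width, step = 32, 8

def synth(label, n=600):
    """Synthetic signal whose dominant frequency depends on the location class."""
    t = np.arange(n)
    freq = 0.05 if label == "kitchen" else 0.15
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

train = [(synth(c), c) for c in ["kitchen", "restroom"] for _ in range(3)]
windows = np.vstack([sliding_windows(s, width, step) for s, _ in train])
labels = np.concatenate([[c] * len(sliding_windows(s, width, step))
                         for s, c in train])

# Score candidate windows and keep the 20 most location-specific as motifs.
candidates = windows[::5]
scores = np.array([location_specificity(c, windows, labels) for c in candidates])
motifs = candidates[np.argsort(scores)[-20:]]

# Train an environment-independent classifier on motif-distance features.
X = np.array([motif_features(s, motifs, width, step) for s, _ in train])
y = [c for _, c in train]
clf = RandomForestClassifier(random_state=0).fit(X, y)

test_series = synth("restroom")
print(clf.predict([motif_features(test_series, motifs, width, step)]))
```

In this sketch the specificity score rewards subsequences whose nearest neighbors come almost entirely from one location class, which is one simple way to operationalize the 'location specificity' described above; the paper's actual scoring and classifier construction may differ.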