Summary: A key goal in machine vision is to understand how the actions of sentient agents such as humans are processed, identified, and understood. The most apparent challenge is segmenting a continuous stream of visual movements into meaningful discrete actions. Part of the intentional vision research was to identify a set of determining features, exhibited by human participants, that account for the selection of significant action boundaries as judged by human raters. That work found that action boundaries could be identified from a set of sub-actions such as hand-to-object contacts, object-to-object contacts, occlusions, and eye movements. Our goal was to create a cost-effective vision system that serves as an easy-to-use training and tracking tool, enabling non-vision specialists to analyze video recordings of experiments. The system was validated for human motion analysis by applying it in conjunction with the psychological studies performed in the intentional vision research. The results correlate with the human rater data gathered in that research, indicating that the cues it observed are captured in our behavior feature vector. The system was then extended to perform autonomous segmentation and analysis for motion studies, broadening the possibility of interdisciplinary use. Of the 100 videos collected, 84 were segmented and analyzed successfully without intervention. The autonomous system also yielded good results in natural scene segmentation.