Android-based customizable media crowdsourcing toolkit for machine vision research
Main Author:
Format: Dissertation
Language: English
Published: University of Oulu, 2018
Subjects:
Online Access: http://urn.fi/URN:NBN:fi:oulu-201812063247 http://nbn-resolving.de/urn:nbn:fi:oulu-201812063247
Summary: Smart devices have become more complex and powerful, increasing in computational power, storage capacity, and battery longevity. Currently available online facial recognition databases do not offer training datasets with enough contextually descriptive metadata for novel scenarios, such as using machine vision to detect whether people in a video like each other based on their facial expressions. The aim of this research is to design and implement a software tool that enables researchers to collect videos from a large pool of people through crowdsourcing for machine vision analysis. We are particularly interested in tagging the videos with the demographic data of study participants as well as data from a custom post hoc survey. This study demonstrates that smart devices and their embedded technologies can be used to collect videos, along with self-evaluated metadata, through crowdsourcing. The application uses sensors embedded in smart devices, such as the camera and GPS, to collect videos, survey data, and geographical data. User engagement is encouraged through periodic push notifications. The videos and metadata collected with the application will be used in future machine vision analysis of various phenomena, such as investigating whether machine vision can detect people's fondness for each other based on their facial expressions and self-evaluated post-task survey data.
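The engagement mechanism described in the summary, periodic push notifications prompting participants to contribute recordings, can be illustrated with a short Kotlin sketch. This is a minimal example under stated assumptions, not the dissertation's actual implementation: it assumes Android's WorkManager library, a minimum SDK of 26, and illustrative names such as `ReminderWorker` and the `"reminders"` channel.

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.work.*
import java.util.concurrent.TimeUnit

// Hypothetical worker that posts a local reminder notification.
// On Android 13+ the POST_NOTIFICATIONS runtime permission must
// already be granted for the notification to be shown.
class ReminderWorker(context: Context, params: WorkerParameters) :
    Worker(context, params) {

    override fun doWork(): Result {
        val manager = applicationContext.getSystemService(NotificationManager::class.java)
        // Channels are required on Android 8.0+; re-creating an existing one is a no-op.
        manager.createNotificationChannel(
            NotificationChannel(
                "reminders", "Recording reminders",
                NotificationManager.IMPORTANCE_DEFAULT
            )
        )
        val notification = NotificationCompat.Builder(applicationContext, "reminders")
            .setSmallIcon(android.R.drawable.ic_dialog_info)
            .setContentTitle("New recording task available")
            .setContentText("Open the app to record and annotate a short video.")
            .build()
        NotificationManagerCompat.from(applicationContext).notify(1, notification)
        return Result.success()
    }
}

// Schedule the reminder roughly once a day; WorkManager enforces a
// 15-minute minimum interval for periodic work and batches execution
// to preserve battery, which suits a crowdsourcing prompt.
fun schedulePeriodicReminder(context: Context) {
    val request = PeriodicWorkRequestBuilder<ReminderWorker>(24, TimeUnit.HOURS).build()
    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "recording_reminder",
        ExistingPeriodicWorkPolicy.KEEP,
        request
    )
}
```

Using `enqueueUniquePeriodicWork` with `KEEP` means re-running the scheduling code on every app start does not reset or duplicate the reminder.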
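Similarly, tagging a recorded video with geographical and self-reported metadata before upload might look like the following sketch. The `VideoMetadata` fields and the `tagWithLocation` helper are hypothetical; the sketch assumes the Google Play Services fused location client and an already-granted coarse-location permission.

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import com.google.android.gms.location.LocationServices

// Illustrative record pairing a captured video with its self-evaluated
// survey answers and sensor-derived context; field names are assumptions.
data class VideoMetadata(
    val videoUri: String,
    val demographics: Map<String, String>,   // e.g. age group, gender
    val surveyAnswers: Map<String, String>,  // custom post hoc survey
    val latitude: Double?,
    val longitude: Double?,
    val recordedAtMillis: Long
)

// Attach the device's last known location to the metadata. Falls back
// to uploading without coordinates when no fix is available.
@SuppressLint("MissingPermission") // ACCESS_COARSE_LOCATION granted earlier
fun tagWithLocation(
    context: Context,
    base: VideoMetadata,
    onTagged: (VideoMetadata) -> Unit
) {
    val client = LocationServices.getFusedLocationProviderClient(context)
    client.lastLocation
        .addOnSuccessListener { location ->
            onTagged(
                if (location != null)
                    base.copy(latitude = location.latitude, longitude = location.longitude)
                else
                    base
            )
        }
        .addOnFailureListener { onTagged(base) }
}
```

Keeping the location lookup asynchronous and optional reflects the crowdsourcing setting: a missing GPS fix should degrade the metadata, not block the contribution.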