Summary: This thesis details the development and evaluation of a new photofitting approach. The motivation for this work is that current photofit systems used by the police, whether manual or computerized, do not appear to work very well. Part of the problem is that these approaches rely on a single facial representation and therefore necessitate a verbal interaction. When multiple faces are presented instead, our innate ability to recognize faces is capitalized upon, and the potentially disruptive effect of the verbal component is reduced. The approach works by employing Genetic Algorithms to evolve a small group of faces to become more like a desired target. The main evolutionary influence is user input specifying the similarity of the presented images to the target under construction.

The thesis follows three main phases of development. The first involves a simple system modelling the internal components of a face (eyes, eyebrows, nose and mouth), with features held in a fixed relationship to each other. The second phase adds external facial features (hair and ears), along with an appropriate head shape and variation in the relationships between features. The underlying model is based on Principal Components Analysis, which captures the statistics of how faces vary in shading, in shape and in the relationships between features. Modelling was carried out in this way to create more realistic-looking photofits and to guard against the implausible featural relationships possible with traditional approaches.

The encouraging results of these two phases prompted the development of a full photofit system: EvoFIT. This software is shown to have continued promise both in the laboratory and in a real case. Future work is directed particularly at resolving issues concerning the anonymity of the database faces and the creation of photofits from the subject's memory of a target.
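The evolutionary loop described in the summary can be sketched as a toy Genetic Algorithm over face parameter vectors (standing in for PCA coefficients), with selection driven by a similarity rating. All names, parameter values and the rating function below are illustrative assumptions, not the actual EvoFIT implementation; in particular, the user's rating of each face is simulated here by distance to a fixed target vector.

```python
import random

N_COEFFS = 10              # hypothetical number of PCA coefficients per face
POP_SIZE = 6               # the small group of faces shown to the user
TARGET = [0.5] * N_COEFFS  # stands in for the face held in the user's memory


def rate(face):
    """Simulated user rating: higher when the face is closer to the target.

    In the real system this score would come from the user comparing the
    displayed face with the remembered target.
    """
    dist = sum((a - b) ** 2 for a, b in zip(face, TARGET)) ** 0.5
    return 1.0 / (1.0 + dist)


def breed(parent_a, parent_b, mutation=0.05):
    """Uniform crossover of two parents followed by small Gaussian mutation."""
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    return [c + random.gauss(0.0, mutation) for c in child]


def evolve(generations=30, seed=0):
    """Evolve a random population toward the target; return (best, initial best rating)."""
    random.seed(seed)
    pop = [[random.uniform(-1.0, 1.0) for _ in range(N_COEFFS)]
           for _ in range(POP_SIZE)]
    first_best = max(rate(face) for face in pop)
    for _ in range(generations):
        scored = sorted(pop, key=rate, reverse=True)
        parents = scored[:2]  # elitism: keep the two best-rated faces
        pop = parents + [breed(*parents) for _ in range(POP_SIZE - 2)]
    return max(pop, key=rate), first_best


best, first_best = evolve()
```

Because the two best-rated faces are carried over unchanged each generation, the top rating can never decrease, mirroring the idea that the user's preferred faces steer the population toward the target.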