Summary: Choosing specific implementation details is one of the most important aspects of creating and evaluating a model. To model cognitive processes properly, these choices must be grounded in empirical research. Unfortunately, modelers are often forced to make decisions in the absence of relevant data. My work investigates the effects of these decisions. Focusing on infant speech segmentation, I incorporate empirical research into modeling choices regarding input representation, inference, and evaluation. First, I use experimental results to argue for syllables as a basic unit of early segmentation and show that the segmentation task is less difficult than previously thought. I then explore the role of different inference algorithms, each of which produces testable predictions. Lastly, I argue that standard methods of model evaluation make unrealistic assumptions about the goal of learning. Evaluating models by their ability to support additional learning tasks shows that gold-standard performance alone is an insufficient metric of segmentation quality. In each of these three cases, I treat model design decisions as free parameters whose impact must be evaluated. By following this approach, future researchers can better gauge the success or failure of cognitive models.