Summary: In this dissertation, I examine the role that intentional descriptions play in our scientific study of the mind. Behavioural scientists often use intentional language in their characterization of cognitive systems, making reference to “beliefs”, “representations”, or “states of information”. What scientific value is gained by employing such intentional terminology?
I begin the dissertation by contrasting intentional descriptions with mechanistic descriptions, as these are the descriptions most commonly used to provide explanations in the behavioural sciences. I then examine how intentional descriptions are employed in various scientific contexts. I conclude that while mechanistic descriptions characterize the underlying structure of systems, intentional descriptions allow us to predict the behaviour of systems while remaining agnostic about their mechanistic underpinnings.
Having established this, I argue that intentional descriptions share much in common with statistical models in the way they characterize systems. Given these similarities, I theorize that intentional descriptions are employed within scientific practice as a particular type of phenomenological model. Phenomenological models are used to study, characterize, and predict the phenomena produced by mechanistic systems without describing their underlying structure. I demonstrate why such models are integral to the scientific discovery and understanding of the mechanisms that make up the brain.
With my account on the table, I look back at the accounts of intentional language that philosophers have previously offered. I highlight the insights that each brought to our understanding of intentional language, and point out where each ultimately goes astray.
I conclude the dissertation by examining the ontological implications of my theory. I demonstrate that my account is compatible with versions of both realism and anti-realism regarding the existence of intentional states.