Summary: | This article shares the problem-solving process and resultant rapid sensemaking methodology created by an interdisciplinary research team faced with qualitative “big data.” Confronted with over half a million free-text comments within an existing data set of 320,500 surveys, our team developed a process to structure the naturally occurring variability within the data, to identify and isolate meaningful analytic units, and to group subsets of our data amenable to automated coding using a template-based process. This allowed a significant portion of the data to be assessed rapidly while preserving the ability to explore the more complex free-text comments through an emergent process informed by grounded theory. In this discussion, we focus on strategies useful to other teams interested in fielding open-ended questions as part of large survey efforts and incorporating those findings into an integrated analysis.
|
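To make the idea of template-based automated coding concrete, the sketch below illustrates one possible form it could take in Python: short, simple comments are matched against keyword templates and coded automatically, while longer or unmatched comments are routed to manual, grounded-theory-informed review. The template names, patterns, word-count threshold, and sample comments are hypothetical illustrations, not the instruments used by the study team.

```python
import re

# Hypothetical code templates: each maps a qualitative code to a keyword/phrase
# pattern. These codes and patterns are illustrative only, not the templates
# developed in the article.
TEMPLATES = {
    "wait_time": re.compile(r"\b(wait(ed|ing)?|delay(ed)?|slow)\b", re.IGNORECASE),
    "staff_courtesy": re.compile(r"\b(rude|polite|friendly|courteous)\b", re.IGNORECASE),
    "praise_general": re.compile(r"\b(great|excellent|thank(s| you)?)\b", re.IGNORECASE),
}

# Comments longer than this are treated as "complex" and routed to manual,
# grounded-theory-informed review instead of automated coding. The threshold
# is an arbitrary placeholder.
MAX_AUTOCODE_WORDS = 12


def code_comment(comment: str) -> dict:
    """Assign template-based codes to a short comment, or flag it for manual review."""
    if len(comment.split()) > MAX_AUTOCODE_WORDS:
        return {"comment": comment, "codes": [], "route": "manual_review"}

    codes = [code for code, pattern in TEMPLATES.items() if pattern.search(comment)]
    route = "auto_coded" if codes else "manual_review"
    return {"comment": comment, "codes": codes, "route": route}


if __name__ == "__main__":
    sample_comments = [
        "Waited two hours to be seen.",
        "Staff were friendly and polite.",
        "The scheduling system kept losing my appointment and nobody could explain why it happened three times.",
    ]
    for result in map(code_comment, sample_comments):
        print(result)
```

In this sketch, the routing step is what lets a large share of short, formulaic comments be coded automatically while preserving the longer, more complex responses for emergent qualitative analysis.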