Abstract: Citizen science has grown rapidly in popularity in recent years due to its potential to educate and engage the public while providing a means to address a myriad of scientific questions. However, the rise in popularity of citizen science has also been accompanied by concerns about the quality of data emerging from citizen science research projects. We assessed data quality in the online citizen science platform Chimp&See, which hosts camera trap videos of chimpanzees (Pan troglodytes) and other species across Equatorial Africa. In particular, we compared detection and identification of individual chimpanzees by citizen scientists with that of experts with years of experience studying those chimpanzees. We found that citizen scientists typically detected the same number of individual chimpanzees as experts, but assigned far fewer identifications (IDs) to those individuals. The IDs they did assign, however, were nearly always in agreement with those provided by experts. We further evaluated the data sets of citizen scientists and experts by constructing a social network from each. We found that both social networks were relatively robust and shared a similar structure, and that individual network positions were positively correlated between them. Our findings demonstrate that, although citizen scientists produced a smaller data set based on fewer confirmed IDs, the data strongly reflect expert classifications and can be used for meaningful assessments of group structure and dynamics. This approach expands opportunities for social research and conservation monitoring in great apes and many other individually identifiable species.
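The comparison described above (building a social network from each data set and correlating individual network positions) could be sketched roughly as follows; this is a minimal illustration only, assuming detections are (video, individual) records, that co-occurrence in a video defines an edge, and that degree centrality with a Spearman correlation is used to compare positions. The helper names and toy data are hypothetical, and the abstract does not specify which network metrics the study actually used.

```python
# Minimal sketch: co-occurrence networks from ID records, compared via
# degree centrality. All names and toy data below are illustrative assumptions.
from collections import defaultdict
from itertools import combinations

import networkx as nx
from scipy.stats import spearmanr


def build_network(detections):
    """Build a co-occurrence network from (video_id, individual_id) records."""
    per_video = defaultdict(set)
    for video_id, individual_id in detections:
        per_video[video_id].add(individual_id)

    graph = nx.Graph()
    for individuals in per_video.values():
        for a, b in combinations(sorted(individuals), 2):
            # Edge weight counts how many videos the pair shared.
            weight = graph.get_edge_data(a, b, {}).get("weight", 0) + 1
            graph.add_edge(a, b, weight=weight)
    return graph


def compare_positions(expert_graph, citizen_graph):
    """Spearman correlation of degree centrality for individuals in both networks."""
    shared = sorted(set(expert_graph) & set(citizen_graph))
    expert_cent = nx.degree_centrality(expert_graph)
    citizen_cent = nx.degree_centrality(citizen_graph)
    return spearmanr([expert_cent[i] for i in shared],
                     [citizen_cent[i] for i in shared])


if __name__ == "__main__":
    # Hypothetical toy data: expert IDs vs. citizen-scientist IDs for the same videos.
    expert = [("v1", "A"), ("v1", "B"), ("v2", "B"), ("v2", "C"),
              ("v3", "B"), ("v3", "D")]
    citizen = [("v1", "A"), ("v1", "B"), ("v2", "B"), ("v2", "C"), ("v3", "B")]
    rho, p = compare_positions(build_network(expert), build_network(citizen))
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```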