Summary: | Applying virtual reality technology to science experiment education is a research direction of practical significance and value in human-computer interaction. However, many existing VR-based education tools are limited in their experimental teaching ability by a single interaction mode, the complexity of user intentions, and the non-physical interaction that virtualization brings, which reduces their practical value and popularity. To address these problems, we construct a multimodal interaction model that fuses gesture, speech, and pressure information. Specifically, our work includes: 1) collecting user input information and time-series information to construct basic data input tuples; 2) using the basic interaction information to identify the user's basic intention, and using the degree of correlation between the user's intentions to judge whether the currently identified intention is correct; 3) allowing users to alternate freely between multi-channel and single-channel interaction. Based on this model, we build a Multi-modal Intelligent Interactive Virtual Experiment Platform (MIIVEP) and design and implement a dropper with strong perception ability, which has been verified, tested, evaluated, and applied in the intelligent virtual experiment system. In addition, to evaluate this work more effectively, we developed a fair scoring criterion for virtual experiment systems (Evaluation Scale of Virtual Experiment System, ESVES) and invited middle school teachers and students to participate in validating the results of this work. Through studies of users' actual usage, we demonstrate the effectiveness of the proposed model and the corresponding implementation.
|
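The three steps summarized above can be sketched in code. This is a minimal illustrative sketch, not the paper's actual implementation: all names (`InputTuple`, `recognize`, `accept`, the `FOLLOWS` table, and the dropper-related intention labels) are assumptions introduced here to show how timestamped multimodal tuples (step 1) might feed a basic-intention recognizer whose output is checked against the previous intention via an intention-correlation table (step 2).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InputTuple:
    """Hypothetical basic data input tuple (step 1): one reading per
    modality, paired with a timestamp for temporal alignment."""
    gesture: Optional[str]     # e.g. "pinch", "release"
    speech: Optional[str]      # recognized utterance, e.g. "add two drops"
    pressure: Optional[float]  # normalized squeeze pressure in [0, 1]
    timestamp: float           # seconds since experiment start

# Toy intention-correlation table (step 2): which basic intentions
# plausibly follow which. Labels are illustrative, not from the paper.
FOLLOWS = {
    "grab_dropper": {"squeeze_dropper", "move_dropper"},
    "squeeze_dropper": {"release_liquid", "move_dropper"},
    "release_liquid": {"squeeze_dropper", "put_down_dropper"},
}

def recognize(t: InputTuple) -> str:
    """Map one multimodal tuple to a candidate basic intention.
    Any single channel (pressure, gesture, or speech) can trigger
    recognition, mirroring the single-channel fallback of step 3."""
    if t.pressure is not None and t.pressure > 0.5:
        return "squeeze_dropper"
    if t.gesture == "pinch":
        return "grab_dropper"
    if t.speech and "drop" in t.speech:
        return "release_liquid"
    return "unknown"

def accept(prev: Optional[str], current: str) -> bool:
    """Accept the current intention only if it is correlated with
    (i.e. a plausible successor of) the previous one."""
    if prev is None:
        return current != "unknown"
    return current in FOLLOWS.get(prev, set())
```

For example, a pinch gesture followed by a high pressure reading yields `grab_dropper` then `squeeze_dropper`, an accepted transition, whereas jumping straight from `grab_dropper` to `release_liquid` would be rejected as inconsistent with the correlation table.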