Summary: | In logical inference, it is essential to be able to deduce an unknown conclusion from one or several known premises. Existing inference methods and models have a certain degree of logical inference ability, but because of the diversity of problem forms and the complexity of the derivation process, their scope of application is limited and their inference results are not ideal. This paper therefore proposes a new neural network model for the logical inference problem in calculus. Using the successive over-relaxation (SOR) method and the principle of recurrent confidence, we build the recurrent confidence inference network (RCI-Net) to solve the inference problem. The network simulates the solving process of an inference problem: starting from the known premises, it computes step by step so that the intermediate result gradually approaches the answer. At the same time, to give RCI-Net stronger logical inference ability, we use the half mean squared error (HMSE) to construct the model's loss function, which improves training efficiency and prevents the training collapse caused by the loss value exceeding the system's numeric range. We take the Sudoku reasoning problem as an example in our experiments. The results show that when the puzzle has 17 given hints, the model's accuracy on the test set reaches 99.67%, which is 3.07% higher than that of existing models, demonstrating that the proposed algorithm solves logical reasoning problems more effectively than existing methods.
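The summary names two technical ingredients, an SOR-style iterative update of a confidence estimate and a half mean squared error loss. Since the paper's own code is not reproduced here, the following is a minimal illustrative sketch of those two ideas, not the authors' RCI-Net: the 9x9x9 confidence tensor over Sudoku cells and digits, the placeholder proposal function, and the relaxation factor `omega` are assumptions introduced only for the example.

```python
# Hypothetical sketch (not the authors' code): an SOR-style recurrent
# confidence update for Sudoku paired with a half mean squared error loss.
# Names, shapes, and the proposal step are illustrative assumptions.
import numpy as np

def half_mse(pred, target):
    # Half mean squared error: one half of the usual MSE, a common variant
    # whose gradient with respect to pred is simply (pred - target) / N.
    return 0.5 * np.mean((pred - target) ** 2)

def sor_confidence_step(conf, propose, omega=1.2):
    # One successive over-relaxation (SOR) style step: blend the current
    # confidence tensor with a newly proposed one, over-relaxing by omega > 1.
    # Clip so the over-relaxed blend cannot produce negative confidences.
    blended = (1.0 - omega) * conf + omega * propose(conf)
    return np.clip(blended, 1e-9, None)

def normalize(conf):
    # Keep each cell's digit confidences a valid probability distribution.
    return conf / conf.sum(axis=-1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Confidence over 9 candidate digits for each of the 81 Sudoku cells.
    conf = normalize(rng.random((9, 9, 9)))
    # Placeholder "proposal": sharpen the current distribution; in RCI-Net
    # this role would be played by the learned recurrent network step.
    propose = lambda c: normalize(c ** 2)
    for _ in range(10):
        conf = normalize(sor_confidence_step(conf, propose))
    target = np.zeros_like(conf)
    target[..., 0] = 1.0  # dummy target used only to exercise the loss
    print("half-MSE:", half_mse(conf, target))
```

In an actual recurrent model, the placeholder `propose` step would be replaced by the learned network transition conditioned on the given hints, and the half-MSE loss would be minimized by gradient descent during training.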