Successive Over Relaxation Recurrent Confidence Inference Network Based on Linear Extrapolation
When solving a logical inference problem, it is essential to be able to deduce an unknown conclusion from one or several known premises. Existing inference methods and models have some logical inference ability, but because problems take diverse forms and the derivation process is complex, the scope in which these methods apply is limited and their inference results are often unsatisfactory. This paper therefore proposes a new neural network model for solving calculating logical inference problems. Using the successive over relaxation (SOR) method and the principle of recurrent confidence, a recurrent confidence inference network (RCI-Net) is built to solve the inference problem. The network simulates the solving process: starting from the known premises of the problem, it computes step by step so that the intermediate result gradually approaches the answer. To give RCI-Net stronger logical inference ability, the half mean squared error (HMSE) is used to construct the model's loss function, which improves training efficiency and prevents the training collapse caused by the loss value exceeding the system's numeric range. Experiments are carried out on the Sudoku reasoning problem. The results show that when the reasoning problem has 17 hints, the model's accuracy on the test set reaches 99.67%, 3.07% higher than existing models, demonstrating that the algorithm solves logical reasoning problems more effectively than existing methods.
Main Authors: | Wenkai Huang, Yihao Xue, Zefeng Xu, Lingkai Hu (School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, China) |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2021-01-01 (IEEE Access, vol. 9, pp. 118346–118356) |
Series: | IEEE Access |
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2021.3107719 |
Subjects: | Calculating logical inference; successive over relaxation; recurrent confidence; half mean squared error; deep learning |
Online Access: | https://ieeexplore.ieee.org/document/9521871/ |
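
The abstract leans on the classical successive over relaxation (SOR) method as the engine behind RCI-Net's step-by-step refinement toward the answer. As background only, the sketch below shows textbook SOR for a linear system Ax = b; it is not the paper's RCI-Net, and the relaxation factor `omega`, the tolerance, and the test matrix are illustrative assumptions.

```python
import numpy as np

def sor_solve(A, b, omega=1.25, tol=1e-8, max_iter=10_000):
    """Textbook successive over relaxation (SOR) for Ax = b.

    Each sweep blends the previous estimate with a Gauss-Seidel update,
    weighted by the relaxation factor omega (0 < omega < 2).
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Partial sums use the newest values already updated this sweep.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Example: a small diagonally dominant system; SOR converges to [1, 1, 1].
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([5.0, 6.0, 5.0])
print(sor_solve(A, b))
```

The over-relaxation factor is what lets each step move further than a plain Gauss-Seidel update, which is the "gradually closer to the answer" behaviour the abstract describes for its iterative inference.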
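
The abstract also credits the half mean squared error (HMSE) loss with keeping loss values inside the system's numeric range. Assuming the common definition of HMSE as one half of the ordinary mean squared error (the paper's exact formulation may differ), a minimal sketch:

```python
import numpy as np

def half_mean_squared_error(y_pred, y_true):
    """Half mean squared error: 0.5 * mean((y_pred - y_true)^2).

    The 1/2 factor halves the loss magnitude (and simplifies the gradient to
    (y_pred - y_true) / N), which helps keep loss values in a comfortable
    numeric range during training.
    """
    diff = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return 0.5 * np.mean(diff ** 2)

# Example: HMSE is exactly half of the ordinary MSE for the same residuals.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 3.5])
print(half_mean_squared_error(y_pred, y_true))  # 0.5 * mean of [0.25, 0.25, 0.25] = 0.125
```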