Summary: | One challenge when conducting exercises in a cyber range is knowing which applications and vulnerabilities are present on the deployed computers. In this paper, the reliability of application and vulnerability reporting by two vulnerability scanners, OpenVAS and Nexpose, has been evaluated based on their accuracy and consistency. In a follow-up experiment, the configurations of two virtual computers were varied in order to identify where each scanner gathers its information. Accuracy was evaluated with the F1-score, which combines the precision and recall metrics into a single number. Precision and recall values were calculated by comparing the applications and vulnerabilities installed on the virtual computers with the scanning reports. Consistency was evaluated by quantifying how similar the reporting of applications and vulnerabilities was across multiple vulnerability scans, expressed as a number between 0 and 1. The vulnerabilities reported by the two scanners were also combined using their union and intersection to increase accuracy. The evaluation reveals that neither Nexpose nor OpenVAS accurately and consistently reports installed applications and vulnerabilities. Nexpose reported vulnerabilities better than OpenVAS, with an accuracy of 0.78, and also reported applications more accurately, with an accuracy of 0.96. Neither scanner reported applications and vulnerabilities consistently over three vulnerability scans. Taking the union of the vulnerabilities reported by both scanners increased accuracy by 8 percent compared with Nexpose alone. However, our conclusion is that the scanners’ reporting does not perform well enough to be used for a reliable inventory of applications and vulnerabilities in a cyber range.
|
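The accuracy metric described above can be sketched as follows: precision, recall, and the F1-score are computed by comparing a scanner's reported items against the ground truth installed on a virtual machine, and two reports can be combined via union or intersection. The helper function, CVE identifiers, and set names below are illustrative assumptions, not the paper's actual data or code.

```python
# Sketch (not the paper's code) of the F1-based accuracy evaluation:
# compare a scanner's reported items with the ground-truth installed items.

def precision_recall_f1(reported: set, installed: set) -> tuple:
    """Return (precision, recall, F1) for one scan report."""
    true_positives = len(reported & installed)
    precision = true_positives / len(reported) if reported else 0.0
    recall = true_positives / len(installed) if installed else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical ground truth and scanner reports (illustrative only).
installed = {"CVE-A", "CVE-B", "CVE-C", "CVE-D"}
nexpose_report = {"CVE-A", "CVE-B", "CVE-C"}
openvas_report = {"CVE-A", "CVE-D", "CVE-X"}

# Combining both scanners, as the paper does with union and intersection.
union_report = nexpose_report | openvas_report
intersection_report = nexpose_report & openvas_report
```

In this toy example, the union report catches vulnerabilities missed by either scanner alone (raising recall) at the cost of including false positives, which mirrors how the combined union in the paper can raise the overall F1-score above a single scanner's.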