Truth in advertising: Reporting performance of computer programs, algorithms and the impact of architecture
Main Author:
Format: Article
Language: English
Published: South African Institute of Computer Scientists and Information Technologists, 2010-11-01
Series: South African Computer Journal
Subjects:
Online Access: http://sacj.cs.uct.ac.za/index.php/sacj/article/view/50
Summary: The level of detail and precision that appears in the experimental methodology section of computer science papers is usually much less than in natural science disciplines. This is partially justified by the different nature of the experiments. The experimental evidence presented here shows that the time taken by the same algorithm varies so significantly on different CPUs that, without knowing the exact model of CPU, it is difficult to compare the results. This is placed in context by analysing a cross-section of experimental results reported in the literature. The reporting of experimental results is sometimes insufficient to allow experiments to be replicated, and in some cases is insufficient to support the claims made for the algorithms. New standards for reporting algorithm results are suggested.
ISSN: 1015-7999, 2313-7835