Estimating runtime of a job in Hadoop MapReduce

Abstract Hadoop MapReduce is a framework for processing vast amounts of data on a cluster of machines in a reliable and fault-tolerant manner. Since awareness of a job's runtime is crucial to the platform's subsequent scheduling decisions and to better resource management, in this paper we propose a new metho...


Bibliographic Details
Main Authors: Narges Peyravi, Ali Moeini
Format: Article
Language: English
Published: SpringerOpen 2020-07-01
Series: Journal of Big Data
Online Access: http://link.springer.com/article/10.1186/s40537-020-00319-4

Similar Items