Best Trade-Off Point Method for Efficient Resource Provisioning in Spark
Given the recent exponential growth in the volume of information processed by Big Data systems, the high energy consumption of data processing engines in datacenters has become a major concern, underlining the need for efficient resource allocation to achieve more energy-efficient computing. We previously propose...
| Main Author: | Peter P. Nghiem |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2018-11-01 |
| Series: | Algorithms |
| Online Access: | https://www.mdpi.com/1999-4893/11/12/190 |
Similar Items

- Time Estimation and Resource Minimization Scheme for Apache Spark and Hadoop Big Data Systems With Failures
  by: Jinbae Lee, et al.
  Published: (2019-01-01)
- Large Scale Implementations for Twitter Sentiment Classification
  by: Andreas Kanavos, et al.
  Published: (2017-03-01)
- A comprehensive performance analysis of Apache Hadoop and Apache Spark for large scale data sets using HiBench
  by: N. Ahmed, et al.
  Published: (2020-12-01)
- Comparative Analysis of Skew-Join Strategies for Large-Scale Datasets with MapReduce and Spark
  by: Cao, H.-P., et al.
  Published: (2022)
- Design of a Distributed Computing System and Big Data Processing Architecture Based on YARN, Storm, and Spark
  by: 曾柏崴, et al.