Smart Data Placement Strategy in Heterogeneous Hadoop

Keywords: Big Data; Data Placement; Hadoop; HDFS; Heterogeneous Cluster.

Authors

  • Nour-Eddine Bakni
    ne.bk.info@gmail.com
    LIS Lab, Faculty of Sciences, University Hassan II of Casablanca, Casablanca, Morocco
  • Ismail Assayad
    LIS Lab, Faculty of Sciences, ENSEM, University Hassan II of Casablanca, Casablanca, Morocco


Big Data platforms are becoming increasingly essential, given the volume of data generated every moment by millions of people around the world. The Hadoop framework stores and processes these large amounts of data in parallel on a cluster of machines. The default data placement strategy of the Hadoop Distributed File System (HDFS), initially designed for a homogeneous cluster where all machines are considered identical, distributes data to nodes based only on their available disk space. Applying this strategy in a heterogeneous environment, where nodes have varying computing or disk storage capacities, can degrade performance. In this paper, we propose a smart data placement strategy (SDPS) for heterogeneous Hadoop clusters that aims to place high-access data on high-performance nodes. It takes cluster heterogeneity into account when distributing data by first dividing nodes into groups based on their performance levels using a clustering algorithm, and then allocating data blocks to appropriate nodes based on their hotness. SDPS also allows the replication factor of data blocks to be set dynamically, reducing wasted storage space while maintaining data availability. Experimental results show that SDPS is more efficient in a heterogeneous environment than the default HDFS data placement policy, improving MapReduce processing time, data locality, and storage efficiency.
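The abstract outlines three mechanisms: grouping nodes by performance with a clustering algorithm, assigning blocks to groups by hotness, and setting the replication factor dynamically. The sketch below illustrates that idea only; the paper's actual algorithm, clustering method, and thresholds are not given here, so all function names, the 1-D k-means stand-in, and the hotness cutoff are illustrative assumptions.

```python
# Illustrative sketch of the SDPS idea (not the authors' actual algorithm):
# 1) cluster nodes by a scalar performance score (simple 1-D k-means as a
#    stand-in for the paper's clustering step),
# 2) place hot blocks on the fast group, cold blocks on the slow group,
# 3) derive a per-block replication factor from its hotness.

def kmeans_1d(scores, k=2, iters=20):
    """Cluster scalar scores into k groups; returns (labels, centers)."""
    centers = [min(scores), max(scores)] if k == 2 else sorted(scores)[:k]
    labels = [0] * len(scores)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(s - centers[c]))
                  for s in scores]
        for c in range(k):
            members = [s for s, l in zip(scores, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

def replication_factor(hotness, min_r=1, max_r=3):
    """Hotter blocks get more replicas; cold blocks keep the minimum."""
    return max(min_r, round(hotness * max_r))

def place_blocks(nodes, blocks, hot_threshold=0.5):
    """Map each block id to candidate node names and a replica count."""
    scores = [n["perf"] for n in nodes]
    labels, centers = kmeans_1d(scores, k=2)
    fast = 0 if centers[0] >= centers[1] else 1  # index of the fast group
    placement = {}
    for b in blocks:
        group = fast if b["hotness"] >= hot_threshold else 1 - fast
        candidates = [n["name"] for n, l in zip(nodes, labels) if l == group]
        placement[b["id"]] = (candidates, replication_factor(b["hotness"]))
    return placement
```

A usage example: with nodes `n1` (perf 1.0), `n2` (0.9), `n3` (0.2), a hot block lands on the `{n1, n2}` group with three replicas, while a cold block lands on `{n3}` with a single replica, mirroring the hot-data-on-fast-nodes and storage-saving behavior the abstract describes.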


DOI: 10.28991/HIJ-2025-06-01-03
