Title

A New Replica Placement Policy For Hadoop Distributed File System

Keywords

Cloud Computing; Data Replication; Hadoop; Hadoop Distributed File System; Load Balance; MapReduce; Replica Placement

Abstract

Today, the Hadoop Distributed File System (HDFS) is widely used to provide scalable and fault-tolerant storage for large volumes of data. One of the key issues affecting HDFS performance is the placement of data replicas. Although the current HDFS replica placement policy achieves both fault tolerance and read/write efficiency, it cannot evenly distribute replicas across cluster nodes and must rely on a separate load-balancing utility to balance replica distributions. In this paper, we present a new replica placement policy for HDFS that generates replica distributions that are not only perfectly even but also meet all HDFS replica placement requirements.
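For context, the stock HDFS placement rules the abstract refers to (replication factor 3) can be sketched as follows. This is an illustration of the *default* policy whose skew the paper addresses, not the new policy the paper proposes; the `Node` record and the `choose`/`pick` helpers are hypothetical names introduced for this example.

```java
import java.util.List;
import java.util.Random;
import java.util.function.Predicate;

/**
 * Illustrative sketch of the default HDFS replica placement rules
 * (replication factor 3). Because replicas 2 and 3 share a rack,
 * replica counts can drift unevenly across nodes over time, which
 * is the imbalance the paper's new policy targets.
 */
public class DefaultPlacementSketch {
    // Hypothetical minimal node model: a name plus its rack ID.
    record Node(String name, String rack) {}

    private final Random random = new Random();

    /** Picks three target nodes following the stock HDFS rules. */
    List<Node> choose(Node writer, List<Node> cluster) {
        // Rule 1: first replica on the writer's node, or a random
        // node if the writer is outside the cluster.
        Node first = cluster.contains(writer)
                ? writer
                : pick(cluster, n -> true);

        // Rule 2: second replica on a node in a *different* rack,
        // protecting against whole-rack failure.
        Node second = pick(cluster,
                n -> !n.rack().equals(first.rack()));

        // Rule 3: third replica on the same rack as the second but a
        // different node, keeping one cross-rack transfer per block.
        Node third = pick(cluster, n ->
                n.rack().equals(second.rack()) && !n.equals(second));

        return List.of(first, second, third);
    }

    // Uniformly picks one node satisfying the given constraint.
    private Node pick(List<Node> cluster, Predicate<Node> ok) {
        List<Node> candidates = cluster.stream().filter(ok).toList();
        return candidates.get(random.nextInt(candidates.size()));
    }
}
```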

Publication Date

6-30-2016

Publication Title

Proceedings - 2nd IEEE International Conference on Big Data Security on Cloud, IEEE BigDataSecurity 2016, 2nd IEEE International Conference on High Performance and Smart Computing, IEEE HPSC 2016 and IEEE International Conference on Intelligent Data and Security, IEEE IDS 2016

Pages

262-267

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/BigDataSecurity-HPSC-IDS.2016.30

Scopus ID

84979790142 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/84979790142
