THE USE OF ROUGH CLASSIFICATION AND TWO THRESHOLD TWO DIVISORS FOR DEDUPLICATION
The data deduplication technique efficiently reduces and removes redundant data in big data storage systems. The main issue is that data deduplication requires expensive computational effort to remove duplicate data due to the vast size of big data. This paper attempts to reduce the time and computation required for the data deduplication stages, since the chunking and hashing stage often requires many calculations and much time. The paper first proposes an efficient new method that exploits parallel processing in deduplication systems for the best performance; the proposed system is designed to use multicore computing efficiently. First, the proposed method removes redundant data by roughly classifying the input into several classes using histogram similarity and the k-means algorithm. Next, a new method for calculating the divisor list for each class is introduced to improve the chunking method and increase the data deduplication ratio. Finally, the performance of the proposed method is evaluated using three datasets as test examples. The results prove that class-based data deduplication on a multicore processor is much faster than on a single-core processor. Moreover, the experimental results show that the proposed method significantly improves the performance of the Two Threshold Two Divisors (TTTD) and Basic Sliding Window (BSW) algorithms.
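The rough-classification step described in the abstract (clustering inputs by histogram similarity with k-means) could be sketched as below. This is a minimal illustration under stated assumptions: the `byte_histogram` and `kmeans` helpers are hypothetical names, a normalized 256-bin byte histogram stands in for the paper's feature vector, and the distance measure and number of classes are not specified by the abstract.

```python
import random

def byte_histogram(data: bytes) -> list[float]:
    # Normalized 256-bin histogram of byte values, used here as the
    # similarity feature (an assumption; the paper's exact feature
    # construction may differ).
    hist = [0] * 256
    for b in data:
        hist[b] += 1
    n = max(len(data), 1)
    return [c / n for c in hist]

def kmeans(points: list[list[float]], k: int, iters: int = 20, seed: int = 0) -> list[int]:
    """Plain k-means on histogram vectors; returns a class label per point."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels
```

Each resulting class can then be deduplicated independently, which is what makes the multicore parallelism described in the abstract straightforward: one worker per class.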
Main Authors: | Hashem B. Jehlol, Loay E. George |
---|---|
Format: | Article |
Language: | Arabic |
Published: | University of Information Technology and Communications, 2023-06-01 |
Series: | Iraqi Journal for Computers and Informatics |
Subjects: | big data; deduplication; hash function; data classification; multicore processing |
Online Access: | https://ijci.uoitc.edu.iq/index.php/ijci/article/view/379 |
_version_ | 1827784205179813888 |
---|---|
author | Hashem B. Jehlol Loay E. George |
author_facet | Hashem B. Jehlol Loay E. George |
author_sort | Hashem B. Jehlol |
collection | DOAJ |
description | The data deduplication technique efficiently reduces and removes redundant data in big data storage systems. The main issue is that data deduplication requires expensive computational effort to remove duplicate data due to the vast size of big data. This paper attempts to reduce the time and computation required for the data deduplication stages, since the chunking and hashing stage often requires many calculations and much time. The paper first proposes an efficient new method that exploits parallel processing in deduplication systems for the best performance; the proposed system is designed to use multicore computing efficiently. First, the proposed method removes redundant data by roughly classifying the input into several classes using histogram similarity and the k-means algorithm. Next, a new method for calculating the divisor list for each class is introduced to improve the chunking method and increase the data deduplication ratio. Finally, the performance of the proposed method is evaluated using three datasets as test examples. The results prove that class-based data deduplication on a multicore processor is much faster than on a single-core processor. Moreover, the experimental results show that the proposed method significantly improves the performance of the Two Threshold Two Divisors (TTTD) and Basic Sliding Window (BSW) algorithms. |
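The Two Threshold Two Divisors (TTTD) chunking that the paper improves on could be sketched roughly as follows. The size thresholds, the two divisors, the window width, and the MD5-based window fingerprint are all illustrative assumptions, not the paper's values: the paper derives a divisor list per class, and production chunkers use a rolling hash (e.g. Rabin fingerprints) so each step costs O(1) instead of re-hashing the window.

```python
import hashlib

# Hypothetical parameters for illustration only.
T_MIN, T_MAX = 2048, 8192      # minimum / maximum chunk size (bytes)
MAIN_D, BACKUP_D = 540, 270    # main and backup divisors
WINDOW = 48                    # sliding-window width for the fingerprint

def window_hash(data: bytes) -> int:
    # Stand-in fingerprint; a real system would use a rolling hash here.
    return int.from_bytes(hashlib.md5(data).digest()[:4], "big")

def tttd_chunks(data: bytes):
    """Yield variable-size chunks using Two Threshold Two Divisors:
    cut where the main divisor matches; fall back to the last backup-divisor
    match, or a hard cut, once the maximum size is reached."""
    start, backup = 0, -1
    i = start + T_MIN
    while i < len(data):
        h = window_hash(data[max(i - WINDOW, start):i])
        if h % MAIN_D == MAIN_D - 1:        # main divisor match: cut here
            yield data[start:i]
            start, backup, i = i, -1, i + T_MIN
            continue
        if h % BACKUP_D == BACKUP_D - 1:    # remember a backup cut point
            backup = i
        if i - start >= T_MAX:              # hit the hard ceiling
            cut = backup if backup > start else i
            yield data[start:cut]
            start, backup, i = cut, -1, cut + T_MIN
            continue
        i += 1
    if start < len(data):
        yield data[start:]                  # final (possibly short) chunk
```

Deduplication then hashes each chunk and stores only chunks whose hash has not been seen before; the two thresholds bound chunk sizes, which is what distinguishes TTTD from the plain Basic Sliding Window (BSW) scheme.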
first_indexed | 2024-03-11T16:00:07Z |
format | Article |
id | doaj.art-96048ac194734e45b0314eaec1d897c3 |
institution | Directory Open Access Journal |
issn | 2313-190X 2520-4912 |
language | Arabic |
last_indexed | 2024-03-11T16:00:07Z |
publishDate | 2023-06-01 |
publisher | University of Information Technology and Communications |
record_format | Article |
series | Iraqi Journal for Computers and Informatics |
spelling | doaj.art-96048ac194734e45b0314eaec1d897c3 2023-10-25T07:52:40Z ara University of Information Technology and Communications Iraqi Journal for Computers and Informatics 2313-190X 2520-4912 2023-06-01 49(1):15-22 10.25195/ijci.v49i1.379 THE USE OF ROUGH CLASSIFICATION AND TWO THRESHOLD TWO DIVISORS FOR DEDUPLICATION Hashem B. Jehlol (Iraqi Commission for Computers and Informatics) Loay E. George (University of Information Technology and Communication) https://ijci.uoitc.edu.iq/index.php/ijci/article/view/379 big data deduplication hash function data classification multicore processing |
spellingShingle | Hashem B. Jehlol Loay E. George THE USE OF ROUGH CLASSIFICATION AND TWO THRESHOLD TWO DIVISORS FOR DEDUPLICATION Iraqi Journal for Computers and Informatics big data deduplication hash function data classification multicore processing |
title | THE USE OF ROUGH CLASSIFICATION AND TWO THRESHOLD TWO DIVISORS FOR DEDUPLICATION |
title_full | THE USE OF ROUGH CLASSIFICATION AND TWO THRESHOLD TWO DIVISORS FOR DEDUPLICATION |
title_fullStr | THE USE OF ROUGH CLASSIFICATION AND TWO THRESHOLD TWO DIVISORS FOR DEDUPLICATION |
title_full_unstemmed | THE USE OF ROUGH CLASSIFICATION AND TWO THRESHOLD TWO DIVISORS FOR DEDUPLICATION |
title_short | THE USE OF ROUGH CLASSIFICATION AND TWO THRESHOLD TWO DIVISORS FOR DEDUPLICATION |
title_sort | use of rough classification and two threshold two divisors for deduplication |
topic | big data deduplication hash function data classification multicore processing |
url | https://ijci.uoitc.edu.iq/index.php/ijci/article/view/379 |
work_keys_str_mv | AT hashembjehlol theuseofroughclassificationandtwothresholdtwodivisorsfordeduplication AT loayegeorge theuseofroughclassificationandtwothresholdtwodivisorsfordeduplication AT hashembjehlol useofroughclassificationandtwothresholdtwodivisorsfordeduplication AT loayegeorge useofroughclassificationandtwothresholdtwodivisorsfordeduplication |