
dc.contributor.author: Huang, H.
dc.contributor.author: Khan, Latifur
dc.contributor.author: Zhou, S.
dc.date.accessioned: 2020-02-12T19:49:23Z
dc.date.available: 2020-02-12T19:49:23Z
dc.date.issued: 2019-05-11
dc.identifier.issn: 1386-7857
dc.identifier.uri: http://dx.doi.org/10.1007/s10586-019-02941-1
dc.identifier.uri: https://hdl.handle.net/10735.1/7265
dc.description: Due to copyright restrictions and/or publisher's policy, full text access from Treasures at UT Dallas is restricted to current UTD affiliates (use the provided Link to Article).
dc.description.abstract: Disk reliability is a serious problem in big data infrastructure environments. Although the reliability of disk drives has improved greatly over the past few years, they remain the most vulnerable core components in a server. When they fail, the result can be catastrophic: recovering the data can take days, and sometimes the data is lost forever, which is unacceptable for important data. XOR parity is a typical method for generating a reliability syndrome and thus improving data reliability; in practice, however, we find that data can still be lost. In most storage systems, reliability improvements are achieved by allocating additional disks in Redundant Arrays of Independent Disks (RAID), which raises hardware costs and is therefore difficult in cost-constrained environments. How to improve data integrity without raising hardware cost has thus attracted much interest from big data researchers. The challenge is that, when creating non-traditional RAID geometries, care must be taken to respect data dependence relationships to ensure that the new RAID strategy improves reliability, which is an NP-hard problem. In this paper, we present an approach that characterizes these challenges using high-dimensional variants of the n-queens problem, enabling practical solutions via the SAT solver MiniSAT, and we use a greedy algorithm to analyze each queen's attack domain as a basis for reliability syndrome generation. Extensive experiments show that the proposed approach is feasible in software-defined data centers and that the algorithm's performance meets the current requirements of big data environments. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
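The XOR parity mentioned in the abstract can be illustrated with a minimal sketch (this is not the paper's classified-enhancement algorithm, just the standard RAID-style parity idea it builds on): XOR-ing equal-length data blocks yields a parity block, and any single lost block can be reconstructed by XOR-ing the surviving blocks with the parity.

```python
# Minimal sketch of XOR-parity syndrome generation and single-block
# recovery, as used in RAID-style reliability schemes. Names here
# (xor_blocks, data) are illustrative, not from the paper.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks into one block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# Data striped across three "disks"; the parity block would live on a fourth.
data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_blocks(data)

# If one data block is lost, XOR-ing the survivors with the parity
# block reconstructs it exactly.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

Because XOR is its own inverse, this scheme tolerates exactly one lost block per parity group; tolerating more failures requires additional syndromes, which is the hardware-cost trade-off the abstract describes.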
dc.language.iso: en
dc.publisher: Springer New York LLC
dc.rights: ©2019 Springer Science+Business Media, LLC, part of Springer Nature
dc.subject: Big data
dc.subject: Computational complexity
dc.subject: Magnetic disks--Reliability
dc.subject: Computer storage devices (Digital)
dc.title: Classified Enhancement Model for Big Data Storage Reliability Based on Boolean Satisfiability Problem
dc.type.genre: article
dc.description.department: Erik Jonsson School of Engineering and Computer Science
dc.identifier.bibliographicCitation: Huang, H., L. Khan, and S. Zhou. 2019. "Classified enhancement model for big data storage reliability based on boolean satisfiability problem." Cluster Computing, doi: 10.1007/s10586-019-02941-1
dc.source.journal: Cluster Computing
dc.contributor.utdAuthor: Khan, Latifur
dc.contributor.VIAF: 51656251 (Khan, L)

