Big Data Cleaning Algorithms in Cloud Computing
DOI: https://doi.org/10.3991/ijoe.v9i3.2765

Keywords: big data, cleaning algorithms, cloud computing, data cleaning, Map-Reduce

Abstract
Big data cleaning is an important research issue in cloud computing theory. Existing data cleaning algorithms assume that all the data can be loaded into main memory at one time, an assumption that is infeasible for big data. To address this, a knowledge-base-driven data cleaning algorithm for cloud computing is proposed using Map-Reduce. It first extracts the atomic knowledge of the selected nodes, then analyzes their relations, deletes identical objects, builds an atomic knowledge sequence ordered by weight, and finally cleans the data according to that sequence. The experimental results show that the algorithm is effective and feasible for big data in a cloud computing environment, and offers good scalability.
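The abstract's pipeline (map records to keys, group, delete identical objects, emit cleaned output) can be illustrated with a minimal MapReduce-style sketch. This is not the paper's algorithm — the knowledge base, weighting scheme, record fields, and key choice below are illustrative assumptions only:

```python
from collections import defaultdict

# Hypothetical record set; field names are illustrative, not from the paper.
records = [
    {"id": 1, "name": "Alice", "city": "Beijing"},
    {"id": 2, "name": "Alice", "city": "Beijing"},   # duplicate of id 1
    {"id": 3, "name": "Bob",   "city": "Shanghai"},
]

def map_phase(record):
    """Emit (key, record): the key is the record content minus its id,
    so identical objects are routed to the same reducer."""
    return (record["name"], record["city"]), record

def reduce_phase(key, group):
    """Keep one representative per key, deleting the identical objects."""
    return min(group, key=lambda r: r["id"])

# Simulated shuffle: group mapper output by key.
groups = defaultdict(list)
for rec in records:
    k, v = map_phase(rec)
    groups[k].append(v)

cleaned = [reduce_phase(k, g) for k, g in groups.items()]
print(sorted(r["id"] for r in cleaned))
```

In a real Map-Reduce deployment the shuffle is performed by the framework, and the reducer's tie-breaking rule would be replaced by the weighted atomic knowledge sequence the paper describes.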
License
The submitting author warrants that the submission is original and that she/he is the author of the submission together with the named co-authors; to the extent the submission incorporates text passages, figures, data, or other material from the work of others, the submitting author has obtained any necessary permission.
Articles in this journal are published under the Creative Commons Attribution Licence (CC BY). This gives readers greater legal certainty about what they can do with published articles, and thus enables wider dissemination and archiving, which in turn makes publishing with this journal more valuable for you, the authors.
By submitting an article, the author grants this journal the non-exclusive right to publish it. The author retains the copyright and the publishing rights for the article without any restrictions.