Indexed in:
Abstract:
In this article, an efficient and scalable distributed web crawler system based on Hadoop is designed and implemented. The paper first briefly introduces the application of cloud computing in the web-crawler field. Then, in view of the current state of crawler systems, it presents a detailed design of a highly scalable crawler system that makes specific use of Hadoop's distributed and cloud computing features. Finally, statistics collected from the system under identical conditions are compared with those of an existing mature system, clearly demonstrating the superiority of the distributed web crawler. This advantage is particularly important for crawling massive data in the big data era. © 2017 IEEE.
Keywords:
Corresponding author information:
Email address: