RCrawler: An R package for parallel web crawling and scraping

Bibliographic Details
Main Authors: Salim Khalil, Mohamed Fakir
Format: Article
Language: English
Published: Elsevier, 2017-01-01
Series: SoftwareX
Online Access: http://www.sciencedirect.com/science/article/pii/S2352711017300110
Description
Summary: RCrawler is a contributed R package for domain-based web crawling and content scraping. As the first implementation of a parallel web crawler in the R environment, RCrawler can crawl, parse, store pages, extract content, and produce data that can be directly employed for web content mining applications. It is also flexible, however, and can be adapted to other applications. The main features of RCrawler are multi-threaded crawling, content extraction, and duplicate-content detection. In addition, it includes functionality such as URL and content-type filtering, crawl-depth control, and a robots.txt parser. Our crawler has a highly optimized system and can download a large number of pages per second while remaining robust against certain crashes and spider traps. In this paper, we describe the design and functionality of RCrawler and report on our experience of implementing it in the R environment, including the optimizations that address the limitations of R. Finally, we discuss our experimental results.
Keywords: Web crawler, Web scraper, R package, Parallel crawling, Web mining, Data collection
ISSN: 2352-7110
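The summary describes multi-threaded crawling with robots.txt support and depth control. A minimal usage sketch follows; the function and argument names reflect the Rcrawler package as commonly documented, but the site URL and parameter values are placeholders, and the package reference manual should be consulted for the authoritative interface (this example performs live network access and is not meant to run unattended):

```r
# Install from CRAN if needed: install.packages("Rcrawler")
library(Rcrawler)

# Crawl one domain with 4 worker cores and 4 parallel connections,
# honoring robots.txt and limiting the crawl to depth 2.
# "http://www.example.com/" is a placeholder domain.
Rcrawler(Website = "http://www.example.com/",
         no_cores = 4,
         no_conn = 4,
         Obeyrobots = TRUE,
         MaxDepth = 2)

# Downloaded pages are stored on disk, and Rcrawler builds an index
# data frame of crawled URLs in the R session for later mining steps.
```

The core/connection counts trade crawl throughput against load on the target server; small values are appropriate when politeness matters more than speed.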