{{Short description|Open source web crawler}}
{{Multiple issues|
{{More citations needed|date=June 2016}}
{{Advert|date=June 2016}}
{{Notability|Products|date=September 2016}}
{{Primary sources|date=April 2017}}}}
{{Infobox software
| name = StormCrawler
| screenshot =
| caption =
| collapsible =
| author =
| developer = DigitalPebble, Ltd.
| released = {{Start date|2014|09|11}}
| discontinued =
| latest release version = 2.8
| latest release date = {{Start date and age|2023|03|29}}
| latest preview version =
| latest preview date =
| programming language = [[Java (programming language)|Java]]
| platform =
| size =
| language =
| genre = [[Web crawler]]
| license = [[Apache License]]
| website = {{url|stormcrawler.net}}
}}

'''StormCrawler''' is an [[open-source software|open-source]] collection of resources for building low-latency, scalable [[web crawler]]s on [[Storm (event processor)|Apache Storm]]. It is provided under the [[Apache License]] and is written mostly in [[Java (programming language)|Java]].

StormCrawler is modular and consists of a core module, which provides the basic building blocks of a web crawler, such as fetching, parsing, and URL filtering. Apart from the core components, the project also provides external resources, such as spouts and bolts for [[Elasticsearch]] and [[Apache Solr]], or a ParserBolt which uses [[Apache Tika]] to parse various document formats.
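
The component model described above can be illustrated with a minimal Storm topology. The sketch below is illustrative rather than taken from the project's documentation: it assumes Apache Storm and the StormCrawler core artifact (DigitalPebble's <code>com.digitalpebble.stormcrawler</code> package line) are on the classpath, and the seed URL is a placeholder.

```java
// Illustrative sketch: wiring StormCrawler building blocks into an
// Apache Storm topology. Class names are assumed from the StormCrawler
// core module; the seed URL is a placeholder.
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

import com.digitalpebble.stormcrawler.ConfigurableTopology;
import com.digitalpebble.stormcrawler.bolt.FetcherBolt;
import com.digitalpebble.stormcrawler.bolt.JSoupParserBolt;
import com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt;
import com.digitalpebble.stormcrawler.indexing.StdOutIndexer;
import com.digitalpebble.stormcrawler.spout.MemorySpout;

public class CrawlTopology extends ConfigurableTopology {

    public static void main(String[] args) throws Exception {
        ConfigurableTopology.start(new CrawlTopology(), args);
    }

    @Override
    protected int run(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();

        // In-memory spout seeded with a start URL (intended for demos/tests)
        builder.setSpout("spout",
                new MemorySpout(new String[] { "http://example.com/" }));

        // Partition URLs (e.g. by host) so politeness applies per site
        builder.setBolt("partitioner", new URLPartitionerBolt())
               .shuffleGrouping("spout");

        // Fetch pages, grouped by the partition key
        builder.setBolt("fetcher", new FetcherBolt())
               .fieldsGrouping("partitioner", new Fields("key"));

        // Parse HTML, extracting text and outlinks
        builder.setBolt("parser", new JSoupParserBolt())
               .localOrShuffleGrouping("fetcher");

        // Print parsed documents to stdout instead of a search backend
        builder.setBolt("indexer", new StdOutIndexer())
               .localOrShuffleGrouping("parser");

        return submit("crawl", conf, builder);
    }
}
```

In a real deployment the stdout indexer would be replaced by one of the external modules (for example the Elasticsearch or Solr bolts), and the memory spout by a spout reading from a persistent URL frontier.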


The project is used by various organisations,<ref>{{cite web|author= |url=https://github.com/DigitalPebble/storm-crawler/wiki/Powered-By |title=Powered By · DigitalPebble/storm-crawler Wiki · GitHub |website=Github.com |date=2017-03-02 |accessdate=2017-04-19}}</ref> notably [[Common Crawl]]<ref>{{Cite web |title=News Dataset Available – Common Crawl |url=http://commoncrawl.org/2016/10/news-dataset-available/}}</ref> for generating a large and publicly available dataset of news.

[[Linux.com]] published a Q&A in October 2016 with the author of StormCrawler.<ref>{{cite web|url=https://www.linux.com/news/stormcrawler-open-source-sdk-building-web-crawlers-apachestorm |title=StormCrawler: An Open Source SDK for Building Web Crawlers with ApacheStorm &#124; Linux.com &#124; The source for Linux information |website=Linux.com |date=2016-10-12 |accessdate=2017-04-19}}</ref> InfoQ ran one in December 2016.<ref>{{cite web|author= |url=http://www.infoq.com/news/2016/12/nioche-stormcrawler-web-crawler |title=Julien Nioche on StormCrawler, Open-Source Crawler Pipelines Backed by Apache Storm |website=Infoq.com |date=2016-12-15 |accessdate=2017-04-19}}</ref> A comparative benchmark with [[Apache Nutch]] was published in January 2017 on dzone.com.<ref>{{cite web|author= |url=https://dzone.com/articles/the-battle-of-the-crawlers-apache-nutch-vs-stormcr |title=The Battle of the Crawlers: Apache Nutch vs. StormCrawler - DZone Big Data |website=Dzone.com |date= |accessdate=2017-04-19}}</ref>

Several research papers mentioned the use of StormCrawler, in particular:
* Crawling the German Health Web: Exploratory Study and Graph Analysis.<ref>{{cite journal|url=https://www.jmir.org/2020/7/e17853/ |title=Crawling the German Health Web: Exploratory Study and Graph Analysis|year=2020 |doi=10.2196/17853 |last1=Zowalla |first1=Richard |last2=Wetter |first2=Thomas |last3=Pfeifer |first3=Daniel |journal=Journal of Medical Internet Research |volume=22 |issue=7 |pages=e17853 |pmid=32706701 |pmc=7414401 |doi-access=free }}</ref>
* The generation of a multi-million page corpus for the Persian language.<ref>{{cite web|url=https://www.researchgate.net/publication/325324201 |title=MirasText: An Automatically Generated Text Corpus for Persian}}</ref>
* The SIREN - Security Information Retrieval and Extraction engine.<ref>{{cite book|doi=10.1007/978-3-319-76941-7_81|title=Advances in Information Retrieval|volume=10772|pages=811–814|series=Lecture Notes in Computer Science|year=2018|last1=Sanagavarapu|first1=Lalit Mohan|last2=Mathur|first2=Neeraj|last3=Agrawal|first3=Shriyansh|last4=Reddy|first4=Y. Raghu|chapter=SIREN - Security Information Retrieval and Extraction eNgine|isbn=978-3-319-76940-0}}</ref>

The project Wiki contains a list of videos and slides available online.<ref>{{cite web|author= |url=https://github.com/DigitalPebble/storm-crawler/wiki/Presentations |title=Presentations · DigitalPebble/storm-crawler Wiki · GitHub |website=Github.com |date=2017-04-04 |accessdate=2017-04-19}}</ref>

==See also==
{{Portal|Free and open-source software}}
* [[Apache Storm]]
* [[Apache Nutch]]

==References==
{{Reflist}}