StormCrawler
| Developer(s) | DigitalPebble, Ltd. |
| --- | --- |
| Initial release | September 11, 2014 |
| Stable release | 1.2 / October 31, 2016 |
| Repository | |
| Written in | Java |
| Operating system | Cross-platform |
| Type | Web crawler |
| License | Apache License |
| Website | stormcrawler |
StormCrawler is an open-source collection of resources for building low-latency, scalable web crawlers on Apache Storm. It is provided under the Apache License and is written mostly in Java.
StormCrawler is modular and consists of a core module that provides the basic building blocks of a web crawler, such as fetching, parsing, and URL filtering. Beyond the core components, the project also provides external resources, such as spouts and bolts for Elasticsearch and Apache Solr, and a ParserBolt that uses Apache Tika to parse various document formats. A minimal sketch of how such components might be wired together is shown below.
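The sketch below is illustrative only and not taken from the project's documentation: it shows how a crawl topology could be assembled with Apache Storm's standard TopologyBuilder, Config, and LocalCluster API. The StormCrawler class and package names used here (MemorySpout, FetcherBolt, JSoupParserBolt under com.digitalpebble.stormcrawler) are assumptions that may differ between releases.

```java
// Illustrative sketch of a minimal StormCrawler topology.
// The com.digitalpebble.stormcrawler imports are assumed names and may not
// match a given release; the org.apache.storm classes are the standard
// Apache Storm 1.x API.
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;

import com.digitalpebble.stormcrawler.bolt.FetcherBolt;      // assumed class name
import com.digitalpebble.stormcrawler.bolt.JSoupParserBolt;  // assumed class name
import com.digitalpebble.stormcrawler.spout.MemorySpout;     // assumed class name

public class MinimalCrawlTopology {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // Spout emitting seed URLs (here an in-memory list, for testing only;
        // the URL is a placeholder).
        builder.setSpout("spout", new MemorySpout(new String[] { "http://example.com/" }));

        // Fetcher bolt downloads the pages emitted by the spout.
        builder.setBolt("fetch", new FetcherBolt()).shuffleGrouping("spout");

        // Parser bolt extracts text and outlinks from the fetched content.
        builder.setBolt("parse", new JSoupParserBolt()).localOrShuffleGrouping("fetch");

        Config conf = new Config();
        conf.setDebug(true);

        // Run locally for a short while; a production deployment would use
        // StormSubmitter and a persistent status backend instead.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("crawl", conf, builder.createTopology());
        Thread.sleep(60_000);
        cluster.shutdown();
    }
}
```

A real configuration would also declare URL filters, parse filters, and a status-updater bolt backed by a store such as Elasticsearch or Apache Solr, but the topology-building pattern remains the same.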
The project is used in production by various companies.[1]
Linux.com published a Q&A with the author of StormCrawler in October 2016.[2] The project wiki contains a list of videos and slides available online.[3]
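
See also

* Apache Solr
* Elasticsearch

References

1. https://github.com/DigitalPebble/storm-crawler/wiki/Powered-By
2. https://www.linux.com/news/stormcrawler-open-source-sdk-building-web-crawlers-apachestorm
3. https://github.com/DigitalPebble/storm-crawler/wiki/Presentations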