StormCrawler: Difference between revisions

From Wikipedia, the free encyclopedia

Revision as of 22:07, 16 December 2016

StormCrawler
Developer(s): DigitalPebble, Ltd.
Initial release: September 11, 2014
Stable release: 1.2 / October 31, 2016
Written in: Java
Operating system: Cross-platform
Type: Web crawler
License: Apache License
Website: stormcrawler.net

StormCrawler is an open source collection of resources for building low-latency, scalable web crawlers on Apache Storm. It is provided under the Apache License and is written mostly in Java.

StormCrawler is modular: a core module provides the basic building blocks of a web crawler, such as fetching, parsing, and URL filtering. Beyond the core components, the project also provides external resources, such as spouts and bolts for Elasticsearch and Apache Solr, or a ParserBolt which uses Apache Tika to parse various document formats.
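A StormCrawler topology wires these building blocks together as an Apache Storm topology, which can be declared in a Flux YAML file. The following fragment is an illustrative sketch only, loosely modelled on the example topology shipped with the project's 1.x releases; the exact class names, file names, and configuration keys may differ between versions.

```yaml
# Hypothetical Flux definition of a minimal StormCrawler topology.
name: "crawler"

spouts:
  # Feeds seed URLs into the topology (illustrative spout choice).
  - id: "spout"
    className: "com.digitalpebble.stormcrawler.spout.MemorySpout"
    parallelism: 1
    constructorArgs:
      - ["http://stormcrawler.net/"]

bolts:
  # Assigns a partition key (e.g. by hostname) so that fetching is polite.
  - id: "partitioner"
    className: "com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt"
    parallelism: 1
  # Fetches the page content over HTTP.
  - id: "fetcher"
    className: "com.digitalpebble.stormcrawler.bolt.FetcherBolt"
    parallelism: 1
  # Parses the fetched HTML and extracts outlinks.
  - id: "parser"
    className: "com.digitalpebble.stormcrawler.bolt.JSoupParserBolt"
    parallelism: 1

streams:
  - from: "spout"
    to: "partitioner"
    grouping:
      type: SHUFFLE
  # Field grouping on the partition key keeps a host on one fetcher task.
  - from: "partitioner"
    to: "fetcher"
    grouping:
      type: FIELDS
      args: ["key"]
  - from: "fetcher"
    to: "parser"
    grouping:
      type: LOCAL_OR_SHUFFLE
```

In this scheme, swapping the in-memory spout for an Elasticsearch or Solr spout from the external modules turns the sketch into a recursive crawl driven by a persistent status index.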

The project is used in production by various companies.[1]

Linux.com published a Q&A in October 2016 with the author of StormCrawler.[2] InfoQ ran one in December 2016.[3]

The project wiki contains a list of videos and slides available online.[4]

References

1. https://github.com/DigitalPebble/storm-crawler/wiki/Powered-By
2. https://www.linux.com/news/stormcrawler-open-source-sdk-building-web-crawlers-apachestorm
3. http://www.infoq.com/news/2016/12/nioche-stormcrawler-web-crawler
4. https://github.com/DigitalPebble/storm-crawler/wiki/Presentations
