|  | storm-crawler | Apache Nutch |
|---|---|---|
| Mentions | - | 3 |
| Stars | 858 | 2,818 |
| Growth | 1.2% | 0.9% |
| Activity | 8.8 | 8.0 |
| Last commit | 8 days ago | 15 days ago |
| Language | HTML | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
storm-crawler
We haven't tracked posts mentioning storm-crawler yet.
Tracking mentions began in Dec 2020.
Apache Nutch
- Distributed Web Crawler
How impossible is this task that's been assigned to my coworkers and me?
Hi, I have read a few comments under the post; there are great suggestions, and your questions about the task are on point. But I believe handling this with a script might not be easy. If I were you, I would use Apache Nutch or a similar open-source library. I used Nutch for my thesis on a similar task, where I had to scrape a lot of blog pages and the other pages they referenced. You can configure everything your questions raise, such as how deep you want to crawl and what kind of content you want to extract. There are also extension points where you can modify the behavior and implement your own logic for parsing the HTML. https://nutch.apache.org
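As the comment notes, most of this is handled through Nutch's configuration rather than custom scripting. A minimal sketch of the one mandatory setting, assuming a Nutch 1.x layout (the agent name below is a placeholder, not a value from the original post):

```xml
<!-- conf/nutch-site.xml: Nutch refuses to fetch anything until the crawler
     identifies itself; the value here is a placeholder -->
<configuration>
  <property>
    <name>http.agent.name</name>
    <value>MyThesisCrawler</value>
  </property>
</configuration>
```

From there, crawl scope is typically narrowed with a `+^https?://...` include pattern in `conf/regex-urlfilter.txt`, and crawl depth is bounded by the number of rounds passed to the `bin/crawl` script, since each round follows discovered links one level further.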
What are some alternatives?
jsoup - jsoup: the Java HTML parser, built for HTML editing, cleaning, scraping, and XSS safety.
Crawler4j - Open Source Web Crawler for Java
Sparkler - Spark-Crawler: Apache Nutch-like crawler that runs on Apache Spark.
PeARS-orchard - This is the development version of PeARS, the people's search engine. More compact but less robust than PeARS-federated. If you just want to use PeARS in real life, use PeARS-federated instead.
Apache Hive - Apache Hive
scaling-to-distributed-crawling - Repository for the Mastering Web Scraping in Python: Scaling to Distributed Crawling blogpost with the final code.
ache - ACHE is a web crawler for domain-specific search.
crawlee - Crawlee, a web scraping and browser automation library for Node.js to build reliable crawlers. In JavaScript and TypeScript. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with Puppeteer, Playwright, Cheerio, JSDOM, and raw HTTP. Both headful and headless mode. With proxy rotation.