Spidey vs scrapyrt
| | Spidey | scrapyrt |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 11 | 816 |
| Growth | - | 1.0% |
| Activity | 9.5 | 6.8 |
| Latest commit | 11 days ago | 2 months ago |
| Language | C# | Python |
| License | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Spidey
- I need data from a website. Is it viable to create an API that scrapes the website and returns the data on an endpoint?
Didn't get a chance to reply earlier, but depending on what you're trying to do, you might want a web crawler. I have a crawler on GitHub that I built for scraping in cases where a site doesn't offer an API. If you go this route, I suggest running the scrape as a background task and serving cached data from the endpoint.
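For illustration only (not from the thread above), here is a minimal sketch of that "background task plus cached data" pattern, assuming Flask and a placeholder `scrape_site()` function standing in for the real scraping logic:

```python
# Minimal sketch: a background thread refreshes a cache periodically,
# and the API endpoint only ever serves the cached copy.
import threading
import time

from flask import Flask, jsonify

app = Flask(__name__)
cache = {"data": None, "updated_at": None}

def scrape_site():
    # Placeholder for the actual scraping/crawling logic.
    return {"example": "scraped payload"}

def refresh_loop(interval_seconds=600):
    # Re-scrape in the background so requests never wait on the target site.
    while True:
        cache["data"] = scrape_site()
        cache["updated_at"] = time.time()
        time.sleep(interval_seconds)

@app.route("/data")
def data():
    # Serve whatever the background task cached most recently.
    return jsonify(cache)

if __name__ == "__main__":
    threading.Thread(target=refresh_loop, daemon=True).start()
    app.run(port=8000)
```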
- Recursion needed in small crawler
This may be overkill, but I have a library out there for building web crawlers: Spidey. I'm not suggesting you use it, but you could look at it for ideas. It uses a multithreaded producer/consumer approach that avoids recursion and stack overflow issues: keep a queue of URLs, pull one from the queue for each request, and push any new URLs you find onto it. I still need to optimize my code a bit, but it may help. Your issue, though, is most likely that you're finding a link back to the page you're currently on; keeping a HashSet or List of already-found URLs would solve that.
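As a rough illustration of the queue-plus-visited-set idea (not Spidey's actual implementation, which is C# and multithreaded), a single-threaded sketch might look like this:

```python
# Sketch of a non-recursive crawl: pull a URL off the queue, record it as
# visited, and push any newly discovered links that haven't been seen yet.
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen
import re

def crawl(start_url, max_pages=50):
    queue = deque([start_url])
    visited = set()  # plays the role of the HashSet of found URLs
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue  # skip links back to pages we've already crawled
        visited.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue
        # Naive href extraction; a real crawler would use a proper HTML parser.
        for href in re.findall(r'href="([^"]+)"', html):
            link = urljoin(url, href)
            if link not in visited:
                queue.append(link)
    return visited
```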
scrapyrt
- New to Python and Scrapy, but I need this project to work so that I can do my data research more easily in the future.
- Scrape data and create a REST API
Alternatively, if you want to use Scrapy, there's a brilliant addition called ScrapyRT which wraps an HTTP API around your Scrapy project.
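To show what consuming that HTTP API can look like, here is a small sketch assuming ScrapyRT's default `/crawl.json` endpoint on port 9080 and a hypothetical spider named "example" (neither comes from the thread above):

```python
# Sketch: ask a running ScrapyRT service to run a spider against one URL
# and print whatever items it scraped.
import requests

resp = requests.get(
    "http://localhost:9080/crawl.json",
    params={"spider_name": "example", "url": "https://example.com/page"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("items", []))
```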
- Scraping name and location info from a LinkedIn profile URL using Apps Script
Put ScrapyRT in place to expose the scraper via a web service.
What are some alternatives?
scrapy-sanoma-kuntavaalit2021 - Fetch Sanoma kuntavaalit 2021 data [Moved to: https://github.com/raspi/scrapy-kuntavaalit2021-sanoma]
twisted-iocpsupport - `twisted-iocpsupport` is an extension module for the Twisted `iocp` reactor to use the Windows I/O Completion Ports (IOCP) networking API. You should not need to install it directly or interact with its API; it is a dependency of Twisted on Windows platforms.
scrapy-proxycrawl-middleware - Scrapy middleware interface to scrape using ProxyCrawl proxy service
cryptoCMD - Cryptocurrency historical price data library in Python. Data from https://coinmarketcap.com.
google-play-scraper - Google play scraper for Python inspired by <facundoolano/google-play-scraper>
courlan - Clean, filter and sample URLs to optimize data collection – includes spam, content type and language filters
jarchive-clues - Web crawler to collect Jeopardy! clues from https://j-archive.com
amazon_price_tracker - A Scrapy spider that notifies you of a price drop in a product you want to buy.
newspaperjs - News extraction and scraping. Article Parsing
alltheplaces - A set of spiders and scrapers to extract location information from places that post their location on the internet.
OpenWebCrawler - An open source Python web crawler meant to crawl the entire internet starting from a single URL. The goal is an efficient, powerful, internet-scale crawler that can be used in any application and forked in any way, as long as the forked project is also open source.
munich-scripts - Some useful scripts simplifying bureaucracy