Feedback Request: Utilities for web scraping
2 projects | reddit.com/r/Python | 30 Dec 2021
You can have a look at https://scrapy.org/
Top 13 Web scraping tools in 2022
1 project | reddit.com/r/u_digitally_rajat | 28 Dec 2021
Scrapy is another tool on our list of the best web scraping tools. It is a collaborative open-source framework for extracting data from websites, aimed at Python programmers who want to build scalable web crawlers.
How is ArchiveBox?
4 projects | reddit.com/r/selfhosted | 27 Dec 2021
If you need more advanced recursive spider/crawling ability beyond --depth=1, check out Browsertrix, Photon, or Scrapy and pipe the output URLs into ArchiveBox.
Do research web crawler programs exist?
1 project | reddit.com/r/AskProgramming | 17 Dec 2021
I would use Scrapy if I were to write this, in case you want to take a crack at it. Selenium is another option I would consider using. UIPath is a nocode option if you want to go that route.
Old guy programmer here, need to brush up on Python quickly!
13 projects | reddit.com/r/Python | 6 Dec 2021
scrapy for reading and processing data on websites
Web scraping data with pagination?
1 project | reddit.com/r/learnpython | 2 Dec 2021
What you want to use instead is a web crawling framework like scrapy, which provides methods and classes for all the common web scraping requirements. It has helpers for pagination, supports callbacks so you can use different parsers for different sub-sites, and offers link extractors to find and follow URLs, asynchronous request handling, logging, automatic request throttling, file exports for your results, and more. In short, it was written to make writing web scrapers as comfortable and easy as possible.
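The core of the pagination feature mentioned above is simply "find the next-page link and follow it". Scrapy's `response.follow()` and link extractors do this for you; as a standard-library-only illustration of the pattern they automate, here is a small parser that pulls the `rel="next"` link out of a made-up pager snippet (the HTML below is a hypothetical example, not from any real site):

```python
from html.parser import HTMLParser


class NextLinkParser(HTMLParser):
    """Record the href of the first <a rel="next"> tag encountered."""

    def __init__(self):
        super().__init__()
        self.next_url = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("rel") == "next" and self.next_url is None:
            self.next_url = attrs.get("href")


# Hypothetical pager markup, for illustration only.
SAMPLE_PAGE = """
<ul class="pager">
  <li><a rel="next" href="/page/2">Next &raquo;</a></li>
</ul>
"""


def find_next_page(html):
    """Return the next-page URL, or None if the page has no pager."""
    parser = NextLinkParser()
    parser.feed(html)
    return parser.next_url


print(find_next_page(SAMPLE_PAGE))  # -> /page/2
```

In a Scrapy spider the equivalent is a one-liner at the end of the `parse` callback, e.g. `yield response.follow(next_url, callback=self.parse)`, with request scheduling, deduplication, and throttling handled by the framework.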
What are the best 5 Web Scraping API/Tool to scrape data?
1 project | reddit.com/r/u_ScrapperExpert | 1 Dec 2021
- Scrapfly (probably the best web scraping API on the market); if you liked ScraperAPI you should test them, you will see the difference
- Scrapy (web scraping framework from Zyte)
- Browserless (automation)
5 ways to keep your skills fresh after finishing a coding bootcamp
5 projects | dev.to | 28 Nov 2021
One way to improve your projects and coding skills is to try new models and libraries. For example, if you did classification with logistic regression, try it also with random forest; if you used TensorFlow, now try Keras; if you scraped a website with BeautifulSoup, now do it with Scrapy. You get the point.
Good way to create a web scraper for multiple different sites
1 project | reddit.com/r/webdev | 18 Nov 2021
What are some alternatives?
requests-html - Pythonic HTML Parsing for Humans™
pyspider - A powerful spider (web crawler) system in Python.
MechanicalSoup - A Python library for automating interaction with websites.
Grab - Web Scraping Framework
portia - Visual scraping for Scrapy
pyppeteer - Headless chrome/chromium automation library (unofficial port of puppeteer)
colly - Elegant Scraper and Crawler Framework for Golang
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
Crawley - Pythonic Crawling / Scraping Framework based on Non Blocking I/O operations.
cola - A high-level distributed crawling framework.