activesoup vs requests-html
| | activesoup | requests-html |
|---|---|---|
| Mentions | 1 | 14 |
| Stars | 43 | 13,595 |
| Growth | - | 0.3% |
| Activity | 4.9 | 0.0 |
| Latest Commit | almost 2 years ago | 28 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
activesoup
- Sunday Daily Thread: What's everyone working on this week?
I dusted off an old web-scraping project (activesoup) that I wrote a few years ago and don't really use much myself anymore, but I think it gets a little bit of usage by others, since every few months I see a new star on GitHub, or a small issue or feature request is filed. This week, it was a small feature request (with a PR too - great!).
requests-html
- Will the requests-html library work like Selenium?
- 8 Most Popular Python HTML Web Scraping Packages with Benchmarks
- How to batch scrape Wall Street Journal (WSJ)'s Financial Ratios Data?
Ya, thanks for the advice. When using the requests_html library, I tried to slow things down with response.html.render(timeout=1000), but it raises a RuntimeError on Google Colab: https://github.com/psf/requests-html/issues/517.
- Note, the first time you ever run the render() method, it will download Chromium into your home directory (e.g. ~/.pyppeteer/). This only happens once.
- Data scraping tools
For dynamic JS, prefer requests-html with XPath selection.
- Which string to lower case method do you use?
Example: requests-html, which has a rather exhaustive README.md, but its dedicated page is not that helpful, if I remember correctly, and the domain is currently suspended.
- Top Python libraries/frameworks that you suggest everyone
When it comes to web scraping, the usual recommendations are beautifulsoup, lxml, or selenium. But I highly recommend people check out requests-html as well. It's a library that sits at a happy medium: as easy to use as beautifulsoup, yet good enough for dynamic, JavaScript-rendered data where a browser emulator like selenium would be overkill.
- How to make all HTTPS traffic in a program go through a specific proxy?
- Requests_html not working?
Quite possible. If you look at the requests-html source code, it is a single Python file that acts as a wrapper around a bunch of other packages, like requests, chromium, parse, lxml, etc., plus a couple of convenience functions. So it could easily be some sort of bad dependency resolution.
- Web Scraping in a professional setting: Selenium vs. BeautifulSoup
What I do is try to see if I can use requests_html first before trying selenium. requests_html is usually enough if I don't need to interact with browser widgets or if the authentication isn't too difficult to reverse engineer.
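One of the threads above asks how to route all HTTPS traffic through a specific proxy. That is usually handled at the session level: requests-html's `HTMLSession` subclasses `requests.Session`, so the standard `proxies` mapping applies. A minimal sketch, where the proxy address is a placeholder:

```python
import requests

session = requests.Session()

# Every request made through this session is routed via the proxy.
# The address below is a placeholder; HTTPS traffic is typically
# tunneled through an HTTP proxy via CONNECT, hence the http:// scheme.
session.proxies.update({
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
})

print(session.proxies["https"])  # http://127.0.0.1:8080
```

The same `proxies` attribute works on `requests_html.HTMLSession`, since it inherits from `requests.Session`.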
What are some alternatives?
undetected-chromedriver - Custom Selenium Chromedriver | Zero-Config | Passes ALL bot mitigation systems (like Distil / Imperva/ Datadadome / CloudFlare IUAM)
Scrapy - Scrapy, a fast high-level web crawling & scraping framework for Python.
MechanicalSoup - A Python library for automating interaction with websites.
requests - A simple, yet elegant HTTP library. [Moved to: https://github.com/psf/requests]
feedparser - Parse feeds in Python
RoboBrowser
pyspider - A Powerful Spider(Web Crawler) System in Python.
httpx - A next generation HTTP client for Python. 🦋
Grab - Web Scraping Framework
google-search-results-python - Google Search Results via SERP API pip Python Package
pyppeteer - Headless chrome/chromium automation library (unofficial port of puppeteer)