selectolax vs requests-html

|  | selectolax | requests-html |
|---|---|---|
| Mentions | 6 | 14 |
| Stars | 970 | 13,584 |
| Growth | - | 0.2% |
| Activity | 7.7 | 0.0 |
| Latest commit | about 2 months ago | 19 days ago |
| Language | Cython | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
selectolax
-
GitHub – GSA/code-gov: An informative repo for all Code.gov repos
https://github.com/rushter/selectolax#simple-benchmark
(Apache Nutch is a Java-based web crawler which supports e.g. CommonCrawl (which backs various foundational LLMs): https://en.wikipedia.org/wiki/Apache_Nutch#Search_engines_bu... But extruct extracts more types of metadata and data than Nutch, AFAIU: https://github.com/scrapinghub/extruct)
datasette-graphql adds a GraphQL HTTP API to a SQLite database:
-
8 Most Popular Python HTML Web Scraping Packages with Benchmarks
selectolax
- High performance code in Python
-
Web Scraping with Python: Everything you need to know to get started (2022)
try this... https://github.com/rushter/selectolax
-
The State of Web Scraping in 2021
Lazyweb link: https://github.com/rushter/selectolax
although I don't follow the need to have what appear to be two completely separate HTML parsing C libraries as dependencies; seeing this in the README for Modest gives me the shivers, because lxml has _seen some shit_:
> Modest is a fast HTML renderer implemented as a pure C99 library with no outside dependencies.
although its other dep seems much more cognizant of the HTML5 standard, for whatever that's worth: https://github.com/lexbor/lexbor#lexbor
---
> It looks like the author of the article just googled some libraries for each language and didn't research the topic
Heh, oh, new to the Internet, are you? :-D
- Show HN: Fast HTML5 parser for Python with multiple backends
requests-html
- Will the requests-html library work like selenium?
-
8 Most Popular Python HTML Web Scraping Packages with Benchmarks
requests-html
-
How to batch scrape Wall Street Journal (WSJ)'s Financial Ratios Data?
Ya, thanks for the advice. When using the requests_html library, I tried slowing things down using response.html.render(timeout=1000), but it raises a RuntimeError on Google Colab instead: https://github.com/psf/requests-html/issues/517.
- Note, the first time you ever run the render() method, it will download Chromium into your home directory (e.g. ~/.pyppeteer/). This only happens once.
-
Data scraping tools
For dynamic js, prefer requests-html with xpath selection.
-
Which string to lower case method do you use?
Example: requests-html which has a rather exhaustive README.md, but their dedicated page is not that helpful, if I remember correctly, and currently the domain is suspended.
-
Top python libraries/ frameworks that you suggest every one
When it comes to web scraping, the usual libraries people recommend are beautifulsoup, lxml, or selenium. But I highly recommend people check out requests-html too. It's a library that is a happy medium: as easy to use as beautifulsoup, yet good enough for dynamic, JavaScript-rendered data where a browser emulator like selenium would be overkill.
- How to make all https traffic in program go through a specific proxy?
-
Requests_html not working?
Quite possible. If you look at the requests-html source code, it is simply a single Python file that acts as a wrapper around a bunch of other packages, like requests, pyppeteer (for Chromium), parse, lxml, etc., plus a couple of convenience functions. So it could easily be some sort of bad dependency resolution.
-
Web Scraping in a professional setting: Selenium vs. BeautifulSoup
What I do is try to see if I can use requests_html first before trying selenium. requests_html is usually enough if I don't need to interact with browser widgets or if the authentication isn't too difficult to reverse engineer.
What are some alternatives?
lxml - The lxml XML toolkit for Python
Scrapy - Scrapy, a fast high-level web crawling & scraping framework for Python.
lexbor - An open source HTML renderer library. https://lexbor.com
MechanicalSoup - A Python library for automating interaction with websites.
html5lib - Standards-compliant library for parsing and serializing HTML documents and fragments in Python
requests - A simple, yet elegant HTTP library. [Moved to: https://github.com/psf/requests]
pyppeteer - Headless chrome/chromium automation library (unofficial port of puppeteer)
feedparser - Parse feeds in Python
pyquery - A jquery-like library for python
RoboBrowser
gazpacho - 🥫 The simple, fast, and modern web scraping library
pyspider - A Powerful Spider(Web Crawler) System in Python.