lexbor
pyppeteer
| | lexbor | pyppeteer |
|---|---|---|
| Mentions | 9 | 13 |
| Stars | 649 | 2,810 |
| Growth | 3.5% | 1.7% |
| Activity | 1.4 | 2.4 |
| Latest Commit | 8 days ago | 2 months ago |
| Language | C | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
lexbor
-
Created a performance-focused HTML5 parser for Ruby, trying to be API-compatible with Nokogiri
It supports both CSS selectors and XPath like Nokogiri, but with separate engines: parsing and the CSS engine by Lexbor, the XPath engine by libxml2. (Nokogiri internally converts CSS selectors to XPath syntax and uses the XPath engine for all searches.)
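For a sense of what CSS-selector lookups against a Lexbor-backed parser look like from Python, here is a minimal sketch using selectolax (the Python binding to the Modest and Lexbor engines listed under the alternatives below); the sample HTML and selector are invented for illustration, and newer selectolax releases also ship a dedicated Lexbor backend (`selectolax.lexbor.LexborHTMLParser`).

```python
# Minimal sketch (example HTML made up for illustration): parse a document and
# query it with CSS selectors via selectolax, which binds the Modest/Lexbor C
# engines. No CSS-to-XPath translation step is involved.
from selectolax.parser import HTMLParser

html = """
<html><body>
  <ul id="links">
    <li><a href="https://lexbor.com">lexbor</a></li>
    <li><a href="https://nokogiri.org">nokogiri</a></li>
  </ul>
</body></html>
"""

tree = HTMLParser(html)

# CSS selectors are evaluated directly by the C engine.
for node in tree.css("#links a"):
    print(node.text(), node.attributes.get("href"))
```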
-
Andreas Kling (of SerenityOS fame) is building a new Linux browser using SerenityOS libraries
An HTML parser: probably the simplest relatively modern example I could find is the 1 MB https://github.com/lexbor/lexbor (haven't used it, but I might look into it more now that I know it exists).
-
The State of Web Scraping in 2021
Lazyweb link: https://github.com/rushter/selectolax
although I don't follow the need to have what appears to be two completely separate HTML parsing C libraries as dependencies; seeing this in the readme for Modest gives me the shivers because lxml has _seen some shit_
> Modest is a fast HTML renderer implemented as a pure C99 library with no outside dependencies.
although its other dep seems much more cognizant of the HTML5 standard, for whatever that's worth: https://github.com/lexbor/lexbor#lexbor
---
> It looks like the author of the article just googled some libraries for each language and didn't research the topic
Heh, oh, new to the Internet, are you? :-D
-
Libraries for retrieving HTML data from websites
Lexbor is here: https://github.com/lexbor/lexbor
-
What second language to learn after Python?
Well, regarding HTML5, what I've found was libxml (does not support tag-soup HTML5), https://github.com/lexbor/lexbor, for which I was unable to find good documentation (see https://lexbor.com/docs/lexbor/#dom), Apache Xerces (which also appears not to support tag-soup HTML5), and Gumbo, which does not appear to be active and does not support selectors or XPath (although there are libraries that add that).
-
You can't parse [X]HTML with regex
I think we've all (mostly?) tried it. It really is the Wild West of the web when you're trying to parse other people's HTML, though.
I've played around with this parser which is extremely quick. https://github.com/lexbor/lexbor
-
How SerpApi sped up data extraction from HTML from 3s to 800ms (or How to profile and optimize Ruby code and C extension)
I’m glad to have the opportunity to contribute to an open-source project that is used by thousands of people. Hopefully, we will speed up Nokogiri (or the XML parser it uses) to match the performance of html5ever or lexbor at some point in the future. 800 ms to extract data from HTML is still too much.
pyppeteer
- What have you automated with python?
- Note, the first time you ever run the render() method, it will download Chromium into your home directory (e.g. ~/.pyppeteer/). This only happens once.
- How to start Web scraping with python?
-
PyAutoGUI with CSS Selector
If you're talking about CSS I reckon you want to click/input things on a website inside your browser. In this case you would use a web driver which can automate a web browser like Chrome or Firefox. Something like Helium, Selenium or pyppeteer.
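As a rough illustration of that kind of browser automation, here is a hedged pyppeteer sketch; the URL and CSS selectors are placeholders, not taken from the original post.

```python
# Illustrative pyppeteer sketch (placeholder URL and selectors): drive a
# headless Chromium, type into a field and click a button chosen via CSS.
import asyncio
from pyppeteer import launch

async def main():
    browser = await launch()              # Chromium is downloaded on first run
    page = await browser.newPage()
    await page.goto("https://example.com/login")
    await page.type("#username", "demo-user")      # fill an input by CSS selector
    # Start waiting for the navigation before clicking, so it isn't missed.
    await asyncio.gather(
        page.waitForNavigation(),
        page.click("button[type=submit]"),
    )
    print(await page.title())
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())
```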
-
The State of Web Scraping in 2021
In my own experience Puppeteer is much better/more capable than Selenium, but the problem is that Puppeteer requires Node.js. Its Python wrapper, https://github.com/pyppeteer/pyppeteer, was not as good as Selenium when you'd like to use Python.
Pyppeteer is feature complete and worth noting: https://github.com/pyppeteer/pyppeteer
-
Scraping data from interative web charts python
For complex pages I usually use Puppeteer (from Google). A Python port is here: https://github.com/pyppeteer/pyppeteer, but that's not as widely used as the official JavaScript version.
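For JavaScript-rendered pages like interactive charts, the usual pattern is to let the browser render first and only then read the DOM. The sketch below assumes a hypothetical page and chart selector.

```python
# Hedged sketch (hypothetical URL and selector): render a JS-heavy page with
# pyppeteer, wait for the chart to appear, then either grab the rendered HTML
# for an HTML parser or pull values straight out of the page's JS context.
import asyncio
from pyppeteer import launch

async def scrape_chart(url):
    browser = await launch(headless=True)
    page = await browser.newPage()
    await page.goto(url, {"waitUntil": "networkidle2"})
    await page.waitForSelector("svg.chart")          # placeholder selector
    html = await page.content()                      # fully rendered markup
    points = await page.evaluate(
        "() => document.querySelectorAll('svg.chart circle').length"
    )
    print("data points found:", points)
    await browser.close()
    return html

asyncio.get_event_loop().run_until_complete(scrape_chart("https://example.com/dashboard"))
```

The rendered HTML returned this way can then be handed to a fast static parser such as lexbor/selectolax for the actual data extraction.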
-
Scrape Google Ad Results with Python
using a headless browser or browser automation frameworks, such as Selenium or pyppeteer.
- Web Scraping 101 with Python
What are some alternatives?
puppeteer - Headless Chrome Node.js API
Scrapy - Scrapy, a fast high-level web crawling & scraping framework for Python.
playwright-python - Python version of the Playwright testing and automation library.
myhtml - Fast C/C++ HTML 5 Parser. Using threads.
Playwright - Playwright is a framework for Web Testing and Automation. It allows testing Chromium, Firefox and WebKit with a single API.
selenium-python-helium - Selenium-python but lighter: Helium is the best Python library for web automation.
requests - A simple, yet elegant, HTTP library.
selectolax - Python binding to Modest and Lexbor engines (fast HTML5 parser with CSS selectors).
scraper - A scraper for EmulationStation written in Go using hashing
scraper - Nodejs web scraper. Contains a command line, docker container, terraform module and ansible roles for distributed cloud scraping. Supported databases: SQLite, MySQL, PostgreSQL. Supported headless clients: Puppeteer, Playwright, Cheerio, JSdom.
utls - Fork of the Go standard TLS library, providing low-level access to the ClientHello for mimicry purposes.