Created a performance-focused HTML5 parser for Ruby, trying to be API-compatible with Nokogiri
2 projects | reddit.com/r/ruby | 20 Dec 2022
It supports both CSS selectors and XPath, like Nokogiri, but with separate engines: parsing and CSS selection are handled by Lexbor, and XPath by libxml2. (Nokogiri internally converts CSS selectors to XPath syntax and uses the XPath engine for all searches.)
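The parenthetical about Nokogiri converting CSS selectors to XPath can be illustrated with a toy sketch. This is not Nokogiri's actual code; the `css_to_xpath` helper below is hypothetical and covers only bare tags, `.class`, `#id`, and `tag.class` forms, while real translators handle the full selector grammar:

```python
import re

def css_to_xpath(selector):
    """Translate a few simple CSS selector forms into XPath.

    Illustrative only: real converters handle the full CSS grammar;
    this sketch covers bare tags, .class, #id, and tag.class.
    """
    if selector.startswith('#'):        # '#main' -> element with id="main"
        return f"//*[@id='{selector[1:]}']"
    if selector.startswith('.'):        # '.item' -> class list contains "item"
        cls = selector[1:]
        return ("//*[contains(concat(' ', normalize-space(@class), ' '), "
                f"' {cls} ')]")
    m = re.fullmatch(r'(\w+)\.([\w-]+)', selector)   # 'div.item'
    if m:
        tag, cls = m.groups()
        return (f"//{tag}[contains(concat(' ', normalize-space(@class), ' '), "
                f"' {cls} ')]")
    return f"//{selector}"              # bare tag name

print(css_to_xpath('div.item'))
```

The `concat(' ', normalize-space(@class), ' ')` trick is the standard way to match one token inside a space-separated `class` attribute, which is why translated XPath expressions tend to look much noisier than the CSS they came from.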
Andreas Kling (of SerenityOS fame) is building a new Linux browser using SerenityOS libraries
3 projects | reddit.com/r/linux | 4 Jul 2022
An HTML parser: probably the simplest relatively modern example I could find, at about 1 MB: https://github.com/lexbor/lexbor (I haven't used it, but I might look into it now that I know it exists.)
The State of Web Scraping in 2021
Lazyweb link: https://github.com/rushter/selectolax
although I don't follow the need to have what appears to be two completely separate HTML parsing C libraries as dependencies; seeing this in the readme for Modest gives me the shivers because lxml has _seen some shit_
> Modest is a fast HTML renderer implemented as a pure C99 library with no outside dependencies.
although its other dep seems much more cognizant about the HTML5 standard, for whatever that's worth: https://github.com/lexbor/lexbor#lexbor
> It looks like the author of the article just googled some libraries for each language and didn't research the topic
Heh, oh, new to the Internet, are you? :-D
Libraries for retrieving HTML data from websites
3 projects | reddit.com/r/cpp_questions | 9 Oct 2021
Lexbor is here: https://github.com/lexbor/lexbor
What second language to learn after Python?
3 projects | reddit.com/r/learnprogramming | 14 May 2021
Well, regarding HTML5, what I found was libxml (which does not support tag-soup HTML5); https://github.com/lexbor/lexbor, for which I was unable to find good documentation (see https://lexbor.com/docs/lexbor/#dom); Apache Xerces (which appears not to support tag-soup HTML5 either); and Gumbo, which does not appear to be active or to support selectors and XPath (although there are libraries that add that).
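For anyone unfamiliar with the term, "tag soup" means markup with unclosed or misnested tags. As a quick illustration (using Python's stdlib `html.parser`, purely for demonstration), a tolerant tokenizer will stream events for soup without complaint, but it performs none of the recovery, such as auto-closing `<li>` or untangling `<b><i>…</b></i>`, that the HTML5 spec defines and that parsers like lexbor implement:

```python
from html.parser import HTMLParser

# Tag soup: unclosed <li> elements and misnested <b><i>...</b></i>.
soup = '<ul><li>one<li>two<b><i>mix</b></i>'

class EventLogger(HTMLParser):
    """Record the raw start/end/data events the tokenizer emits."""
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(('start', tag))
    def handle_endtag(self, tag):
        self.events.append(('end', tag))
    def handle_data(self, data):
        self.events.append(('data', data))

logger = EventLogger()
logger.feed(soup)
# No ('end', 'li') event is ever emitted: the parser reports exactly
# what it saw and leaves error recovery to the caller.
print(logger.events)
```

A spec-compliant HTML5 parser would turn this same input into a well-formed tree with both `<li>` elements closed, which is the behavior libxml and Xerces lack.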
You can't parse [X]HTML with regex
3 projects | news.ycombinator.com | 5 Mar 2021
I think we've all (mostly?) tried it. It really is the Wild West of the web when you're trying to parse other people's HTML, though.
I've played around with this parser which is extremely quick. https://github.com/lexbor/lexbor
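The hazard of regex-parsing other people's HTML is easy to reproduce with the standard library alone. In this illustrative comparison (the `LinkCollector` class is hypothetical, built on Python's stdlib `html.parser`), a naive regex silently misses links that vary in quoting, attribute order, or line breaks:

```python
import re
from html.parser import HTMLParser

# Realistic variation a regex like r'<a href="([^"]*)">' cannot cope with:
# an extra attribute before href, single quotes, and a tag split over lines.
page = '''<a class="ext" href="https://example.com">one</a>
<a href='https://example.org'
   target="_blank">two</a>'''

class LinkCollector(HTMLParser):
    """Collect href attributes from <a> start tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            href = dict(attrs).get('href')
            if href:
                self.links.append(href)

naive = re.findall(r'<a href="([^"]*)">', page)

collector = LinkCollector()
collector.feed(page)

print(naive)             # [] -- the regex matched nothing
print(collector.links)   # ['https://example.com', 'https://example.org']
```

The parser wins not because the regex was badly written but because attribute order, quoting style, and whitespace are all free to vary in valid HTML, so a pattern anchored to one surface form can never be robust.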
How SerpApi sped up data extraction from HTML from 3s to 800ms (or How to profile and optimize Ruby code and C extension)
11 projects | dev.to | 2 Feb 2021
I’m glad to have the opportunity to contribute to an open-source project that is used by thousands of people. Hopefully, we will speed up Nokogiri (or the XML parser it uses) to match the performance of html5ever or lexbor at some point in the future. 800 ms to extract data from HTML is still too much.
What have you automated with python?
2 projects | reddit.com/r/Python | 28 Jan 2023
Note, the first time you ever run the render() method, it will download Chromium into your home directory (e.g. ~/.pyppeteer/). This only happens once.
4 projects | reddit.com/r/programmingcirclejerk | 28 Jul 2022
How to start Web scraping with python?
2 projects | reddit.com/r/learnpython | 22 Nov 2021
PyAutoGUI with CSS Selector
2 projects | reddit.com/r/learnpython | 28 Oct 2021
If you're talking about CSS, I reckon you want to click/input things on a website inside your browser. In that case you would use a web driver, which can automate a browser like Chrome or Firefox: something like Helium, Selenium, or pyppeteer.
The State of Web Scraping in 2021
In my own experience Puppeteer is much more capable than Selenium, but the problem is that Puppeteer requires Node.js. Its Python wrapper, https://github.com/pyppeteer/pyppeteer, was not as good as Selenium if you want to stick with Python.
Pyppeteer is feature-complete and worth noting: https://github.com/pyppeteer/pyppeteer
Scraping data from interactive web charts with Python
3 projects | reddit.com/r/learnpython | 29 Jul 2021
Scrape Google Ad Results with Python
2 projects | dev.to | 18 May 2021
using a headless browser or a browser-automation framework, such as Selenium or pyppeteer.
Web Scraping 101 with Python
5 projects | news.ycombinator.com | 10 Feb 2021
What are some alternatives?
puppeteer - Headless Chrome Node.js API
Scrapy - Scrapy, a fast high-level web crawling & scraping framework for Python.
playwright-python - Python version of the Playwright testing and automation library.
myhtml - Fast C/C++ HTML 5 Parser. Using threads.
Playwright - Playwright is a framework for Web Testing and Automation. It allows testing Chromium, Firefox and WebKit with a single API.
selenium-python-helium - Selenium-python but lighter: Helium is the best Python library for web automation.
requests - A simple, yet elegant, HTTP library.
selectolax - Python binding to Modest and Lexbor engines (fast HTML5 parser with CSS selectors).
scraper - A scraper for EmulationStation written in Go using hashing
scraper - Nodejs web scraper. Contains a command line, docker container, terraform module and ansible roles for distributed cloud scraping. Supported databases: SQLite, MySQL, PostgreSQL. Supported headless clients: Puppeteer, Playwright, Cheerio, JSdom.
utls - Fork of the Go standard TLS library, providing low-level access to the ClientHello for mimicry purposes.