wi-page
pup
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
wi-page
-
Ask HN: What are the best tools for web scraping in 2022?
[4] https://github.com/altilunium/wi-page (Scrape Wikipedia to get the most active contributors to a certain article)
- Show HN: Wi-Page – Rank Wikipedia Article's Contributors by Byte Counts
pup
-
script to download some notes
Change lnk=$(curl -s https://www.selfstudys.com$url | grep "PDFFlip" | cut -d '"' -f 6) to lnk=$(curl -s https://www.selfstudys.com$url | pup "div#PDFF attr{source}"). Here pup will print the content of the source attribute from the div tag with id PDFF. I don't know that much about HTML & CSS, so this is what I came up with, but I'm sure you can also select a class and build a list of sub-URLs from them. Check out the video from bugswriter on pup, or read the docs on GitHub for more info. GitHub link: https://github.com/ericchiang/pup
-
What monitoring tool do you use or recommend?
jq is pretty amazing. If you are comfortable with jQuery-like CSS selector syntax, then I should also mention a couple of similar CLI utilities that apply it to HTML: htmlq and pup.
-
Creating a data scraper as a beginner?
Regex is not a great tool for parsing web pages. Open up a browser dev tools window and select a bit of the page. Right click > Copy... XPath expression or CSS selector. A proper web scraping tool will accept either of those. No muss, no fuss. You can even use simple command-line tools such as xpath or pup.
- December 5, 2022: FLiP Stack Weekly
-
Show HN: A tool like jq, but for parsing HTML
This is HTML to JSON, written in Rust, and there's also pup[1], which I found out about just the other day on HN[2]; it uses a very similar syntax (CSS selectors) but outputs HTML and is written in Go.
I can see room for both, though it would be interesting to have a more detailed comparison to go on (e.g. types of HTML, speed, etc.).
[1] https://github.com/ericchiang/pup
[2] https://news.ycombinator.com/item?id=33805732
- Pup: Parsing HTML at the command line
-
pup: Parsing HTML at the Command Line
It looks like the project became inactive for a bit, and there are alternatives such as htmlq, etc. https://github.com/ericchiang/pup/issues/150
-
Converting field before delimiter to uppercase and how to replace with multiple newlines
Another tool worth mentioning is pup - it can produce JSON output, which means you can pipe it to jq.
What are some alternatives?
kiwix-hotspot - Kiwix Hotspot Image Creator (Desktop) for Windows/macOS/Linux
htmlq - Like jq, but for HTML.
estela - estela, an elastic web scraping cluster 🕸
xidel - Command line tool to download and extract data from HTML/XML pages or JSON-APIs, using CSS, XPath 3.0, XQuery 3.0, JSONiq or pattern matching. It can also create new or transformed XML/HTML/JSON documents.
danker - Compute PageRank on >3 billion Wikipedia links on off-the-shelf hardware.
gron - Make JSON greppable!
scrapy-redis - Redis-based components for Scrapy.
yq - Command-line YAML, XML, TOML processor - jq wrapper for YAML/XML/TOML documents
polite - Be nice on the web
cascadia - Go cascadia package command line CSS selector
curl-impersonate - curl-impersonate: A special build of curl that can impersonate Chrome & Firefox
ddgr - :duck: DuckDuckGo from the terminal