| | scrapyd | pup |
|---|---|---|
| Mentions | 6 | 52 |
| Stars | 2,848 | 8,000 |
| Growth | 0.7% | - |
| Activity | 5.9 | 0.0 |
| Latest commit | 3 months ago | about 1 month ago |
| Language | Python | HTML |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
scrapyd
-
Multiple scrapy spiders automation? Executing batch scraping manually now
Scrapyd is a good option to run your scrapers remotely in the cloud. Adding a Scrapyd dashboard makes the experience better.
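Scrapyd exposes a small HTTP JSON API, and a run is queued by POSTing to its schedule.json endpoint. A minimal sketch of building that request, assuming a Scrapyd server on localhost:6800 and placeholder project/spider names (the endpoint and parameter names are from Scrapyd's documented API):

```python
# Sketch: constructing a call to Scrapyd's schedule.json endpoint.
# Host, project, and spider names are placeholders.
from urllib.parse import urlencode

def build_schedule_request(host, project, spider, **spider_args):
    """Return the URL and form-encoded body for a Scrapyd schedule call."""
    url = f"http://{host}/schedule.json"
    payload = {"project": project, "spider": spider, **spider_args}
    return url, urlencode(payload)

url, body = build_schedule_request("localhost:6800", "myproject", "myspider")
print(url)   # http://localhost:6800/schedule.json
print(body)  # project=myproject&spider=myspider
```

POSTing that body (for example with curl or requests) makes Scrapyd queue the spider and return a job id you can poll via listjobs.json.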
-
Ask HN: What are the best tools for web scraping in 2022?
8. If you decide to have your own infrastructure, you can use https://github.com/scrapy/scrapyd.
-
The Complete Scrapyd Guide - Deploy, Schedule & Run Your Scrapy Spiders
Scrapyd is one of the most popular options. Created by the developers behind Scrapy itself, Scrapyd is a tool for running Scrapy spiders in production on remote servers, so you don't need to run them on a local machine.
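Deployment to a Scrapyd server is usually driven by the companion scrapyd-client tool, which reads a [deploy] target from the project's scrapy.cfg; a minimal sketch, with placeholder URL and project name:

```ini
# scrapy.cfg — illustrative [deploy] target for scrapyd-deploy
# (URL and project name are placeholders)
[settings]
default = myproject.settings

[deploy:production]
url = http://localhost:6800/
project = myproject
```

Running `scrapyd-deploy production` then packages the project as an egg and uploads it to the server's addversion.json endpoint.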
-
The Complete Guide To ScrapydWeb, Get Setup In 3 Minutes!
ScrapydWeb is one of the most popular open source Scrapyd admin dashboards. Boasting 2,400 GitHub stars, ScrapydWeb has been fully embraced by the Scrapy community.
-
Any paid services for hosting scrapy spiders?
or scrapyd -> https://github.com/scrapy/scrapyd
-
Daily Share Price Notifications using Python, SQL and Africas Talking - Part Two
While I am aware that we could use Scrapyd, along with ScrapydWeb, to host our spiders and actually send requests, I personally prefer to keep my scraper deployment simple, quick, and free. If you are interested in this alternative instead, check out this post written by Harry Wang.
pup
-
script to download some notes
Change `lnk=$(curl -s https://www.selfstudys.com$url | grep "PDFFlip" | cut -d '"' -f 6)` to `lnk=$(curl -s https://www.selfstudys.com$url | pup "div#PDFF attr{source}")`. Here pup will print the content of the `source` attribute from the div tag with id `PDFF`. I don't know that much about HTML & CSS, so this is what I came up with, but I am sure you can also select a class and build a list of sub-URLs from the results. Check out the video from bugswriter on pup, or read the docs on GitHub for more info. GitHub link: https://github.com/ericchiang/pup
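If pup isn't installed, the same attribute extraction can be approximated with Python's stdlib html.parser; a sketch using the id and attribute name from the example above (the HTML snippet here is invented for illustration):

```python
# Approximate pup's `div#PDFF attr{source}` with the stdlib HTML parser.
from html.parser import HTMLParser

class AttrGrabber(HTMLParser):
    """Record the given attribute of the first tag with a matching id."""
    def __init__(self, tag_id, attr_name):
        super().__init__()
        self.tag_id, self.attr_name = tag_id, attr_name
        self.found = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if self.found is None and attrs.get("id") == self.tag_id:
            self.found = attrs.get(self.attr_name)

html = '<div id="PDFF" source="https://example.com/notes.pdf"></div>'
parser = AttrGrabber("PDFF", "source")
parser.feed(html)
print(parser.found)  # https://example.com/notes.pdf
```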
-
What monitoring tool do you use or recommend?
jq is pretty amazing. If you are comfortable with jQuery-like CSS selector syntax, then I should also mention a couple of similar CLI utilities that apply it to HTML: htmlq and pup.
-
Creating a data scraper as a beginner?
Regex is not a great tool for parsing web pages. Open up a browser dev tools window, select a bit of the page, then right-click > Copy > XPath expression or CSS selector. A proper web scraping tool will accept either of those. No muss, no fuss. You can even use simple command line tools: xpath or pup.
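As a sketch of that copied-selector workflow, Python's stdlib xml.etree accepts a limited XPath subset (the markup and selector below are invented; real, messy HTML needs a tolerant tool like pup or lxml):

```python
# Extract values with an XPath-style path instead of regex.
import xml.etree.ElementTree as ET

page = """
<html>
  <body>
    <div id="prices">
      <span class="price">19.99</span>
      <span class="price">4.50</span>
    </div>
  </body>
</html>
"""

root = ET.fromstring(page)
# Path copied from dev tools would look much like this:
prices = [el.text for el in root.findall(".//div[@id='prices']/span")]
print(prices)  # ['19.99', '4.50']
```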
- December 5, 2022: FLiP Stack Weekly
-
Show HN: A tool like jq, but for parsing HTML
This is HTML to JSON, written in Rust; there's also pup[1], which I found out about just the other day on HN[2], which uses a very similar syntax (CSS selectors) but outputs HTML and is written in Go.
I can see room for both, though it would be interesting to have a more detailed comparison to go on (e.g. types of HTML, speed, etc.).
[1] https://github.com/ericchiang/pup
[2] https://news.ycombinator.com/item?id=33805732
- Pup: Parsing HTML at the command line
-
pup: Parsing HTML at the Command Line
It looks like the project became inactive for a bit and there are alternatives such as htmlq, etc. https://github.com/ericchiang/pup/issues/150
-
Converting field before delimiter to uppercase and how to replace with multiple newlines
Another tool worth mentioning is pup - it can produce JSON output which means you can pipe it to jq
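pup's `json{}` selector is what makes its output pipeable into jq; a rough stdlib approximation of that HTML-to-JSON step (the output shape here is a simplified sketch, not pup's exact schema):

```python
# Serialize matched HTML elements to JSON, pup-style.
import json
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every <a> tag as a dict of its attributes."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.append({"tag": "a", **dict(attrs)})

c = LinkCollector()
c.feed('<p><a href="/one">one</a> <a href="/two">two</a></p>')
print(json.dumps(c.links))
```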
What are some alternatives?
Gerapy - Distributed Crawler Management Framework Based on Scrapy, Scrapyd, Django and Vue.js
htmlq - Like jq, but for HTML.
scrapydweb - Web app for Scrapyd cluster management, Scrapy log analysis & visualization, Auto packaging, Timer tasks, Monitor & Alert, and Mobile UI.
xidel - Command line tool to download and extract data from HTML/XML pages or JSON-APIs, using CSS, XPath 3.0, XQuery 3.0, JSONiq or pattern matching. It can also create new or transformed XML/HTML/JSON documents.
SpiderKeeper - admin ui for scrapy/open source scrapinghub
gron - Make JSON greppable!
polite - Be nice on the web
yq - Command-line YAML, XML, TOML processor - jq wrapper for YAML/XML/TOML documents
puppeteer - Node.js API for Chrome
cascadia - Go cascadia package command line CSS selector
estela - estela, an elastic web scraping cluster 🕸
ddgr - :duck: DuckDuckGo from the terminal