| | pypandoc | shot-scraper |
|---|---|---|
| Mentions | 5 | 16 |
| Stars | 804 | 1,535 |
| Growth | - | - |
| Activity | 6.8 | 7.1 |
| Last commit | 11 days ago | about 1 month ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pypandoc
- Web Scraping in Python – The Complete Guide
I recently used [0] Playwright for Python and [1] pypandoc to build a scraper that fetches a webpage and turns the content into sane markdown so that it can be passed into an AI coding chat [2].
They are both very gentle dependencies to add to a project. Both packages include built-in or scriptable methods to install their underlying platform-specific binary dependencies, which means you don't need to ask end users to use some complex, platform-specific package manager to install Playwright and pandoc.
Playwright lets you scrape pages that rely on JavaScript. Pandoc is great at turning HTML into sensible markdown. Below is an excerpt of the OpenAI pricing docs [3] that have been scraped to markdown [4] in this manner; a minimal sketch of the approach follows the excerpt.
[0] https://playwright.dev/python/docs/intro
[1] https://github.com/JessicaTegner/pypandoc
[2] https://github.com/paul-gauthier/aider
[3] https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turb...
[4] https://gist.githubusercontent.com/paul-gauthier/95a1434a28d...
## GPT-4 and GPT-4 Turbo
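A minimal sketch of that Playwright-plus-pypandoc approach, assuming nothing about the linked scraper beyond what's described above (the function name, output format, and URL are illustrative):

from playwright.sync_api import sync_playwright
import pypandoc  # also needs the pandoc binary, e.g. via pypandoc.download_pandoc()

def page_to_markdown(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")  # wait for JS-rendered content
        html = page.content()
        browser.close()
    # pandoc turns the rendered HTML into readable Markdown (GitHub-flavored here)
    return pypandoc.convert_text(html, "gfm", format="html")

print(page_to_markdown("https://platform.openai.com/docs/models"))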
- GitHub Accelerator: our first cohort and what's next
- Converting multiple docx to multiple txt files
Use Pypandoc
shot-scraper
- I want to create IMDB for Open source projects
I had one of these recently! https://github.com/simonw/shot-scraper/pull/133/files
They're /incredibly/ rare though.
- 2024-03-01 listening in on the neighborhood
If anyone wants the raw data, it's available in the window._Flourish_data variable on https://flo.uri.sh/visualisation/16818696/embed
Which means you can extract it with my https://shot-scraper.datasette.io/ tool like this:
shot-scraper javascript \
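The command above is cut off; based on shot-scraper's documented `javascript` subcommand (a URL plus a JavaScript expression, with the JSON result printed to stdout), the full invocation would look roughly like this (the output filename is assumed):

shot-scraper javascript \
  'https://flo.uri.sh/visualisation/16818696/embed' \
  'window._Flourish_data' > flourish-data.json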
- Web Scraping in Python – The Complete Guide
I strongly recommend adding Playwright to your set of tools for Python web scraping. It's by far the most powerful and best designed browser automation tool I've ever worked with.
I use it for my shot-scraper CLI tool: https://shot-scraper.datasette.io/ - which lets you scrape web pages directly from the command line by running JavaScript against pages to extract JSON data: https://shot-scraper.datasette.io/en/stable/javascript.html
- A command-line utility for taking automated screenshots of websites
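For the screenshot side mentioned in that description, the basic usage documented in the README is just a URL; the screenshot is saved to a PNG named after the page:

shot-scraper https://datasette.io/
# writes a screenshot to datasette-io.png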
- Don't Build a General Purpose API to Power Your Own Front End (2021)
This is exactly what the `Accept` HTTP header is for https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ac...
I think the author is generally correct that all JSON should be provided in a single request, but if you want to prove it, you should be able to switch your Accept header between `application/json` and `text/html` and see nearly identical data.
In fact, this is what both GitLab and GitHub do. Try it out!
`curl -L https://github.com/simonw/shot-scraper` (text/html)
`curl --header "Accept: application/json" -L https://github.com/simonw/shot-scraper` (application/json)
- Git scraping: track changes over time by scraping to a Git repository
Git is a key technology in this approach, because the value you get out of this form of scraping is the commit history - it's a way of turning a static source of information into a record of how that information changed over time.
I think it's fine to use the term "scraping" to refer to downloading a JSON file.
These days an increasing number of websites work by serving up JSON which is then turned into HTML by a client-side JavaScript app. The JSON often isn't a formally documented API, but you can grab it directly to avoid the extra step of processing the HTML.
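As a minimal sketch of the pattern (the URL and filename here are placeholders, and the job would run on a schedule, e.g. from cron or a GitHub Actions workflow):

curl -s https://example.com/data.json -o data.json
git add data.json
git commit -m "Latest data: $(date -u)" || echo "nothing new to commit"
git push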
I do run Git scrapers that process HTML as well. A couple of examples:
scrape-san-mateo-fire-dispatch https://github.com/simonw/scrape-san-mateo-fire-dispatch scrapes the HTML from http://www.firedispatch.com/iPhoneActiveIncident.asp?Agency=... and records both the original HTML and converted JSON in the repository.
scrape-hacker-news-by-domain https://github.com/simonw/scrape-hacker-news-by-domain uses my https://shot-scraper.datasette.io/ browser automation tool to convert an HTML page on Hacker News into JSON and save that to the repo. I wrote more about how that works here: https://simonwillison.net/2022/Dec/2/datasette-write-api/
- Web Scraping via JavaScript Runtime Heap Snapshots (2022)
- Need help with downloading a section of multiple sites as pdf files.
You can use shot-scraper: https://github.com/simonw/shot-scraper
- Ask HN: Small scripts, hacks and automations you're proud of?
I have a neat Hacker News scraping setup that I'm really pleased with.
The problem: I want to know when content from one of my sites is submitted to Hacker News, and keep track of the points and comments over time. I also want to be alerted when it happens.
Solution: https://github.com/simonw/scrape-hacker-news-by-domain/
This repo does a LOT of things.
It's an implementation of my Git scraping pattern - https://simonwillison.net/2020/Oct/9/git-scraping/ - in that it runs a script once an hour to check for more content.
It scrapes https://news.ycombinator.com/from?site=simonwillison.net (scraping the HTML because this particular feature isn't supported by the Hacker News API) using shot-scraper - a tool I built for command-line browser automation: https://shot-scraper.datasette.io/
The scraper works by running this JavaScript against the page and recording the resulting JSON to the Git repository: https://github.com/simonw/scrape-hacker-news-by-domain/blob/...
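The linked JavaScript file is truncated above, so purely as an illustration of the shape of such an expression (the CSS selectors here are assumptions about Hacker News's markup, not the repo's actual code):

shot-scraper javascript \
  'https://news.ycombinator.com/from?site=simonwillison.net' \
  'Array.from(document.querySelectorAll("tr.athing .titleline > a")).map(a => ({title: a.innerText, url: a.href}))'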
That solves the "monitor and record any changes" bit.
But... I want alerts when my content shows up.
I solve that using three more tools I built: https://datasette.io/ and https://datasette.io/plugins/datasette-atom and https://datasette.cloud/
This script here runs to push the latest scraped JSON to my SQLite database hosted using my in-development SaaS platform, Datasette Cloud: https://github.com/simonw/scrape-hacker-news-by-domain/blob/...
I defined this SQL view https://simon.datasette.cloud/data/hacker_news_posts_atom which shows the latest data in the format required by the datasette-atom plugin.
Which means I can subscribe to the resulting Atom feed (add .atom to that URL) in NetNewsWire and get alerted when my content shows up on Hacker News!
I wrote a bit more about how this all works here: https://simonwillison.net/2022/Dec/2/datasette-write-api/
- Show HN: Plus – Self Updating Screenshots
Sounds a lot like Simon Willison's open source project shot-scraper
https://github.com/simonw/shot-scraper
What are some alternatives?
taffy - A high performance rust-powered UI layout library
gmail-sidebar-drive - A simple gmail add on to display all the drive folders and files in sidebar.
sniffnet - Comfortably monitor your Internet traffic
zettelkasten - Creating notes with the zettelkasten note taking method and storing all notes on github
formbricks - Open Source Survey Platform
scrape-san-mateo-fire-dispatch
nuxt - The Intuitive Vue Framework.
bbcrss - Scrapes the headlines from BBC News indexes every five minutes
trpc - Move Fast and Break Nothing. End-to-end typesafe APIs made easy.
scrape-hacker-news-by-domain - Scrape HN to track links from specific domains
responsively-app - A modified web browser that helps in responsive web development. A web developer's must have dev-tool.
SeleniumBase - Python's all-in-one framework for web crawling, scraping, testing, and reporting. Supports pytest. UC Mode provides stealth. Includes many tools.