queensland-traffic-conditions
| | queensland-traffic-conditions | scrape-san-mateo-fire-dispatch |
|---|---|---|
| Stars | 1 | 1 |
| Mentions | 1 | 2 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | 5 days ago | 7 months ago |
| Language | Python | - |
| License | - | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
queensland-traffic-conditions
-
Git scraping: track changes over time by scraping to a Git repository
I've been promoting this idea for a few years now, and I've seen an increasing number of people put it into action.
A fun way to track how people are using this is with the git-scraping topic on GitHub:
https://github.com/topics/git-scraping?o=desc&s=updated
That page orders repos tagged git-scraping by most-recently-updated, which shows which scrapers have run most recently.
As I write this, repos that have updated within just the last minute include:
https://github.com/drzax/queensland-traffic-conditions
https://github.com/jasoncartwright/bbcrss
https://github.com/jackharrhy/metrobus-timetrack-history
https://github.com/outages/bchydro-outages
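The same listing can be pulled programmatically through GitHub's search API, which supports filtering by topic and sorting by most recently updated. A minimal sketch in Python (unauthenticated, so subject to the low anonymous rate limit):

```python
import json
import urllib.request

# Search for repositories tagged "git-scraping", most recently updated first.
url = (
    "https://api.github.com/search/repositories"
    "?q=topic:git-scraping&sort=updated&order=desc&per_page=10"
)
req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
with urllib.request.urlopen(req) as resp:
    results = json.load(resp)

for repo in results["items"]:
    print(repo["full_name"], repo["pushed_at"])
```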
scrape-san-mateo-fire-dispatch
-
Git scraping: track changes over time by scraping to a Git repository
Git is a key technology in this approach, because the value you get out of this form of scraping is the commit history - it's a way of turning a static source of information into a record of how that information changed over time.
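A minimal sketch of that loop in Python. The feed URL is a hypothetical stand-in, and the script assumes it runs inside a Git checkout on some schedule (cron, a GitHub Actions workflow, etc.):

```python
# A minimal git-scraping loop: fetch, overwrite the tracked file,
# commit only when the content actually changed.
import subprocess
import urllib.request

FEED_URL = "https://example.com/traffic.json"  # hypothetical source
SNAPSHOT = "data.json"

# Fetch the latest copy of the resource and overwrite the tracked file.
with urllib.request.urlopen(FEED_URL) as resp:
    payload = resp.read()
with open(SNAPSHOT, "wb") as f:
    f.write(payload)

# Commit only if Git sees a difference, so the history becomes a record
# of every observed change to the source.
if subprocess.run(["git", "status", "--porcelain", SNAPSHOT],
                  capture_output=True, text=True).stdout.strip():
    subprocess.run(["git", "add", SNAPSHOT], check=True)
    subprocess.run(["git", "commit", "-m", "Latest data"], check=True)
```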
I think it's fine to use the term "scraping" to refer to downloading a JSON file.
These days an increasing number of websites work by serving up JSON which is then turned into HTML by a client-side JavaScript app. The JSON often isn't a formally documented API, but you can grab it directly to avoid the extra step of processing the HTML.
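A sketch of that shortcut, fetching a hypothetical JSON endpoint directly. Re-serializing with stable formatting (indentation, sorted keys) is an optional extra, not something from the quoted post, but it keeps the commit diffs easy to read:

```python
import json
import urllib.request

API_URL = "https://example.com/api/incidents.json"  # hypothetical undocumented endpoint

with urllib.request.urlopen(API_URL) as resp:
    data = json.load(resp)

# Normalize the formatting so each commit diff shows only real changes.
with open("incidents.json", "w") as f:
    json.dump(data, f, indent=2, sort_keys=True)
```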
I do run Git scrapers that process HTML as well. A couple of examples:
scrape-san-mateo-fire-dispatch https://github.com/simonw/scrape-san-mateo-fire-dispatch scrapes the HTML from http://www.firedispatch.com/iPhoneActiveIncident.asp?Agency=... and records both the original HTML and converted JSON in the repository.
scrape-hacker-news-by-domain https://github.com/simonw/scrape-hacker-news-by-domain uses my https://shot-scraper.datasette.io/ browser automation tool to convert an HTML page on Hacker News into JSON and save that to the repo. I wrote more about how that works here: https://simonwillison.net/2022/Dec/2/datasette-write-api/
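For the first of those two examples, here is a rough sketch of the save-both-artifacts pattern. This is not the repository's actual code, the URL is a placeholder (the real one is truncated above), and the assumption that the page is a simple table of incidents is mine:

```python
import json
import urllib.request
from html.parser import HTMLParser

URL = "https://example.com/active-incidents"  # hypothetical stand-in for the dispatch page


class RowCollector(HTMLParser):
    """Collects the text of every table cell, grouped by row (assumed layout)."""

    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_cell = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_cell and data.strip():
            self._row.append(data.strip())


html = urllib.request.urlopen(URL).read().decode("utf-8", "replace")

# Keep the raw HTML alongside the parsed JSON so the commit history preserves both.
with open("incidents.html", "w") as f:
    f.write(html)

parser = RowCollector()
parser.feed(html)
with open("incidents.json", "w") as f:
    json.dump(parser.rows, f, indent=2)
```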
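And for the second example, a sketch of driving shot-scraper from Python, assuming it is installed (pip install shot-scraper, then shot-scraper install for the browser). Its javascript command runs an expression in a headless browser and prints the returned value as JSON; the URL and expression here are illustrative guesses, not what the repo actually uses:

```python
import subprocess

# Hypothetical: collect the story links from a Hacker News listing page.
js = "Array.from(document.querySelectorAll('.titleline a')).map(a => a.href)"
result = subprocess.run(
    [
        "shot-scraper", "javascript",
        "https://news.ycombinator.com/from?site=simonwillison.net",
        js,
    ],
    capture_output=True, text=True, check=True,
)

# The JSON result arrives on stdout; commit the file as in the sketches above.
with open("hacker-news.json", "w") as f:
    f.write(result.stdout)
```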
What are some alternatives?
bbcrss - Scrapes the headlines from BBC News indexes every five minutes
shot-scraper - A command-line utility for taking automated screenshots of websites
gesetze-im-internet - Archive of German legal acts (weekly archive of gesetze-im-internet.de)
Geo-IP-Database - Automatically updated, tree-formatted database derived from the MaxMind database
carbon-intensity-forecast-tracking - Tracking the reliability of the National Grid's Carbon Intensity forecast
github-actions - Information and tips regarding GitHub Actions