static-builds
Statically built, dependency-free binaries of software packages for Linux. (by supriyo-biswas)
fetchurls
A bash script to spider a site, follow links, and fetch urls (with built-in filtering) into a generated text file. (by adamdehaven)
| | static-builds | fetchurls |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 4 | 123 |
| Growth | - | - |
| Activity | 6.0 | 0.0 |
| Last commit | 4 days ago | over 2 years ago |
| Language | Shell | Shell |
| License | - | MIT License |
The mentions count is the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
static-builds
Posts with mentions or reviews of static-builds. We have used some of these posts to build our list of alternatives and similar projects.
fetchurls
Posts with mentions or reviews of fetchurls. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-01-29.
- Best way to back up entire website on a schedule
  You could also look into something like archivebox.io, but it doesn't mirror all that well. fetchurls can generate a URL list, though, which could in turn be fed into ArchiveBox. ArchiveBox would be handy if you wanted the wget download along with a PDF print, and perhaps submission to the Wayback Machine.
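The workflow described in the post above, generating a URL list from a spider crawl and feeding it into ArchiveBox, can be sketched in plain shell. This is a minimal illustration, not fetchurls itself: the sample log lines stand in for real `wget --spider` output, and the `archivebox add` step (shown as a comment) assumes an initialized ArchiveBox directory.

```shell
# Sample lines standing in for wget --spider log output (fetchurls drives
# wget in spider mode and filters its log in a broadly similar way).
cat > spider.log <<'EOF'
2023-01-29 URL: https://example.com/ 200 OK
2023-01-29 URL: https://example.com/about 200 OK
2023-01-29 URL: https://example.com/about 200 OK
EOF

# Pull every URL out of the log and de-duplicate into a plain text list.
grep -oE 'https?://[^ ]+' spider.log | sort -u > urls.txt

# The resulting list could then be handed to ArchiveBox, e.g.:
#   archivebox add < urls.txt
cat urls.txt
```

In a real run you would replace the sample log with an actual crawl, e.g. `wget --spider --recursive --no-verbose --output-file=spider.log https://example.com/`.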
- How do I get a list of all the individual track links of a bandcamp album/user?
  fetchurls tries to grab them. You can enter the domain as artist.bandcamp.com
- Options to backup https://trythatsoap.com/?
  Welcome. Since ArchiveBox doesn't crawl pages, you might be interested in something like fetchurls as well.
- Is there a way to take "snapshots" of every page of a website?
What are some alternatives?
When comparing static-builds and fetchurls you can also consider the following projects:
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
Yacy - Distributed Peer-to-Peer Web Search Engine and Intranet Search Appliance
bpkg - Lightweight bash package manager
browsertrix-crawler - Run a high-fidelity browser-based crawler in a single Docker container
kusionstack.io - Source for kusionstack.io site
bach - Bach Testing Framework