url-collector
An application that crawls the Common Crawl corpus for URLs with the specified file extensions. (by bottomless-archive-project)
library-of-alexandria
Library of Alexandria (LoA in short) is a project that aims to collect and archive documents from the internet. (by bottomless-archive-project)
| | url-collector | library-of-alexandria |
|---|---|---|
| Mentions | 2 | 23 |
| Stars | 0 | 108 |
| Growth | - | 0.9% |
| Activity | 5.1 | 7.6 |
| Last commit | over 2 years ago | 19 days ago |
| Language | Java | Java |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
url-collector
Posts with mentions or reviews of url-collector. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-10-02.
- 240 million URLs for PDF and DOC files
Well, I used Java. The app is still somewhat under construction, but it is available here: https://github.com/bottomless-archive-project/url-collector
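url-collector's core job, per its description, is filtering Common Crawl for URLs with specified file extensions. A minimal, hypothetical Java sketch of that extension check follows; this is not the project's actual code, and the class, method, and sample URLs are made up for illustration:

```java
import java.util.List;
import java.util.Locale;
import java.util.Set;
import java.util.stream.Collectors;

// Minimal sketch: keep only URLs whose path ends in a wanted file
// extension, the core idea behind scanning crawl index entries for
// PDF/DOC links. The sample data stands in for real index records.
public class UrlExtensionFilter {

    public static boolean hasExtension(String url, Set<String> extensions) {
        // Strip the query string and fragment before checking the extension.
        String path = url.split("[?#]", 2)[0].toLowerCase(Locale.ROOT);
        int dot = path.lastIndexOf('.');
        return dot >= 0 && extensions.contains(path.substring(dot + 1));
    }

    public static void main(String[] args) {
        Set<String> wanted = Set.of("pdf", "doc");
        List<String> urls = List.of(
                "https://example.com/papers/report.pdf",
                "https://example.com/index.html",
                "https://example.com/files/thesis.DOC?download=1");

        List<String> matches = urls.stream()
                .filter(u -> hasExtension(u, wanted))
                .collect(Collectors.toList());

        matches.forEach(System.out::println);
    }
}
```

A real run would stream the Common Crawl index rather than a hardcoded list, but the per-URL filter logic stays the same.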
library-of-alexandria
Posts with mentions or reviews of library-of-alexandria. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-01-11.
- How I archived 100 million PDF documents... (Part 1)
After a quick Google search, I figured out that less than 1% of ancient texts survived to the modern day. This unfortunate fact was my inspiration to start working on an ambitious web crawling and archival project called the Library of Alexandria.
- A newspaper vanished from the internet. Did someone pay to kill it? | *digs into link rot and the loss of digital archives*
Here is a link to the latest releases: https://github.com/bottomless-archive-project/library-of-alexandria/releases
- What do you do when your PC runs out of internal HDD cables?
- Putting 5,998,794 books on IPFS
What do you mean by storage system? Just curious because I'm working on a similar project.
- The r/DataHoarder community is mentioned in this: The Enduring Allure of the Library of Alexandria | On the Media | WNYC Studios
If anybody is interested in the project mentioned in the interview, it's available here: https://github.com/bottomless-archive-project/library-of-alexandria
- Anyone here with 50TB, 100TB+ of personal storage that isn't mostly movies/TV/porn?
I'm collecting documents and working on an app suite called Library of Alexandria. I have 91 million docs at the moment (mostly PDFs), and the number is only going up. All of that fits on around 100 TB with gzip compression.
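The figures quoted above (around 100 TB for 91 million documents) imply an average compressed document size of roughly 1 MB. A quick back-of-the-envelope check, assuming decimal terabytes:

```java
// Back-of-the-envelope check of the "91 million docs on ~100 TB" claim.
// Assumption: decimal units (1 TB = 10^12 bytes, 1 MB = 10^6 bytes).
public class AverageDocSize {
    public static void main(String[] args) {
        double totalBytes = 100e12;       // ~100 TB of compressed storage
        double documents = 91_000_000.0;  // ~91 million documents
        double avgMb = totalBytes / documents / 1e6;
        System.out.printf("~%.1f MB per compressed document%n", avgMb);
        // Prints: ~1.1 MB per compressed document
    }
}
```

That average is plausible for a corpus dominated by PDFs, which are typically in the hundreds of kilobytes to a few megabytes each.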
- Archive for software / comp sci books / ebooks?
- Bakancslista (Hungarian for "bucket list")
- Good document classification library in Java
I'm working on an OSS project called Library of Alexandria. It is an application suite built to collect, archive, and make searchable various (mostly PDF) documents. I have a little more than 90 million documents archived; my next step is to somehow label/classify them.
- I was wondering what y'all hoarded on your epic setups. I use only one NAS containing 2.8 TB of my personal data. Looking forward to seeing what you hoard.
90 TB of PDFs. I'm working on the Library of Alexandria project. Just a fun little library, nothing more. 😅😅😅
What are some alternatives?
When comparing url-collector and library-of-alexandria you can also consider the following projects:
fscrawler - Elasticsearch File System Crawler (FS Crawler)
Paperless-ng - A supercharged version of paperless: scan, index and archive all your physical documents
SpotifyDiscoveryBot - A Java-based bot that automatically crawls for new releases by your followed artists on Spotify. Never miss a release again!
Archive.org-Downloader - Python3 script to download archive.org books in PDF format
mixnode-warcreader-java - Read Web ARChive (WARC) files in Java.
Paperless - Scan, index, and archive all of your paper documents
precomp-cpp - Precomp, C++ version - further compress already compressed files
java-warc - Read Web ARChive (WARC) files in Java.
document-location-database