library-of-alexandria
Library of Alexandria (LoA for short) is a project that aims to collect and archive documents from the internet.
After a quick Google search, I learned that less than 1% of ancient texts have survived to the modern day. This unfortunate fact inspired me to start working on an ambitious web crawling and archival project, called the Library of Alexandria.
I found one Java library on GitHub (thanks, Mixnode) that was able to read these files. Unfortunately, it had not been maintained for the past couple of years, so I picked it up and forked it to make it a little easier to use. (A couple of years later, this repo was moved under the Bottomless Archive project as well.)
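The files in question were WARC archives, the standard container format for web crawl data (the Mixnode library was a WARC reader). As an illustration only, here is a minimal sketch of iterating over such an archive in Java; it uses the jwarc library rather than the fork mentioned above, so the API shown is jwarc's, and the file name is a placeholder:

```java
import java.nio.channels.FileChannel;
import java.nio.file.Paths;

import org.netpreserve.jwarc.WarcReader;
import org.netpreserve.jwarc.WarcRecord;
import org.netpreserve.jwarc.WarcResponse;

public class WarcExample {

    public static void main(String[] args) throws Exception {
        // The file name is a placeholder; jwarc handles gzipped input transparently.
        try (WarcReader reader = new WarcReader(FileChannel.open(Paths.get("crawl.warc.gz")))) {
            for (WarcRecord record : reader) {
                // Response records hold the HTTP responses captured by the crawler.
                if (record instanceof WarcResponse) {
                    WarcResponse response = (WarcResponse) record;
                    System.out.println(response.target());
                }
            }
        }
    }
}
```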
So, when I started to view the documents, a lot of them simply failed to open. I had to look around for a library that could verify PDF documents. I had some experience with PDFBox in the past, so it seemed like a good go-to solution. It has no built-in way to verify documents, but it can open and parse them, and that was enough to filter out the broken ones. It felt a little strange to read a whole PDF into memory just to check whether it was valid, but I needed a simple fix, and it worked really well.
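Since PDFBox has no dedicated validation call, the check boils down to "does the document parse without throwing". A minimal sketch of that filter, assuming the PDFBox 2.x API (the class and method names are mine, not the project's):

```java
import java.io.File;
import java.io.IOException;

import org.apache.pdfbox.pdmodel.PDDocument;

public class PdfValidator {

    // If PDFBox manages to parse the file, we treat it as a valid PDF;
    // a parsing failure means the document is corrupt or not a PDF at all.
    public static boolean isValidPdf(File file) {
        try (PDDocument document = PDDocument.load(file)) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```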
Finally, at this point, I was able to go through a bunch of webpages (parsing them with JSoup in the process), grab all the links that pointed to PDF files based on the file extension, then download them. Unsurprisingly, most of the pages (~60-80%) turned out to be unavailable (404 Not Found and friends). After a quick cup of coffee, I had the 10,000 documents on my hard drive. This is when I realized that I had one more problem to solve.
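The link extraction itself is the straightforward part. Here is a minimal sketch of how it might look with JSoup, assuming the crawler hands over page URLs one at a time (the class and method names are my own, for illustration):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class PdfLinkExtractor {

    // Downloads and parses the page, then collects every absolute link
    // whose target ends with the .pdf file extension.
    public static List<String> extractPdfLinks(String pageUrl) throws IOException {
        Document page = Jsoup.connect(pageUrl).get();

        List<String> pdfLinks = new ArrayList<>();
        for (Element anchor : page.select("a[href]")) {
            // absUrl resolves relative links against the page's base URL.
            String href = anchor.absUrl("href");
            if (href.toLowerCase().endsWith(".pdf")) {
                pdfLinks.add(href);
            }
        }

        return pdfLinks;
    }
}
```

Filtering on the extension alone misses PDFs served from extension-less URLs, which is one reason validating the downloaded files afterwards (as above) still matters.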