| | library-of-alexandria | jsoup |
|---|---|---|
| Mentions | 23 | 27 |
| Stars | 108 | 10,645 |
| Growth | 0.9% | - |
| Activity | 7.6 | 9.1 |
| Latest commit | 25 days ago | about 1 month ago |
| Language | Java | Java |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
library-of-alexandria
- How I archived 100 million PDF documents... (Part 1)
  After a quick Google search, I figured out that less than 1% of ancient texts survived to the modern day. This unfortunate fact was my inspiration to start working on an ambitious web crawling and archival project called the Library of Alexandria.
- A newspaper vanished from the internet. Did someone pay to kill it? | *digs into link rot and the loss of digital archives*
  Here is a link to the latest releases: https://github.com/bottomless-archive-project/library-of-alexandria/releases
- What do you do when your PC ran out of internal HDD cables?
- Putting 5,998,794 books on IPFS
  What do you mean by storage system? Just curious because I'm working on a similar project.
- r/DataHoarder community is mentioned in this: The Enduring Allure of the Library of Alexandria | On the Media | WNYC Studios
  If anybody is interested in the project mentioned in the interview, it's available here: https://github.com/bottomless-archive-project/library-of-alexandria
- Anyone here with 50TB/100TB+ of personal storage that isn't mostly movies/TV/porn?
  I'm collecting documents. Working on an app suite called Library of Alexandria. Got 91 million docs at the moment (mostly PDFs), and it's only going up. All of that fits on around 100 TB with gzip compression.
- Archive for software / comp sci books / ebooks?
- Bakancslista ("Bucket list")
- Good document classification library in Java
  I'm working on an open-source project called Library of Alexandria. It is an application built to collect, archive, and make searchable various (mostly PDF) documents. I have a little more than 90 million documents archived. My next step is to somehow label/classify them.
- I was wondering what y'all hoarded on your epic setups. I use only one NAS containing 2.8 TB of my personal data. Looking forward to seeing what you hoard.
  90 TB of PDFs. I'm working on the Library of Alexandria project. Just a fun little library, nothing more. 😅
jsoup
- FLaNK Stack Weekly for 20 June 2023
- Russia news visualisation on steroids
  The HTML parsing library is in app-kt. It's called JSoup: https://jsoup.org/
- Looking for direction, guidance on in-home call button.
  For parsing the webpage in Java or Kotlin, you can use Jsoup.
- Web Scraping Google With Java
  Jsoup is a Java library that can be used for both extracting and parsing HTML.
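As a small sketch of what that extract-and-parse workflow looks like (the class name and HTML snippet here are made up for illustration; jsoup itself must be on the classpath):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

import java.util.List;

public class SearchResultParser {

    // Parses an HTML snippet and returns the text of every <h3> heading,
    // the way a scraper might pull result titles out of a results page.
    public static List<String> headingTexts(String html) {
        Document doc = Jsoup.parse(html);
        return doc.select("h3").eachText();
    }

    public static void main(String[] args) {
        String html = "<h3>First hit</h3><p>snippet</p><h3>Second hit</h3>";
        System.out.println(headingTexts(html)); // [First hit, Second hit]
    }
}
```

The example parses a string so it runs offline; for a live page, `Jsoup.connect(url).get()` returns a `Document` in the same way.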
- How I archived 100 million PDF documents... (Part 1)
  Finally, at this point, I was able to go through a bunch of webpages (parsing them in the process with JSoup), grab all the links that pointed to PDF files based on the file extension, then download them. Unsurprisingly, most of the pages (~60-80%) ended up being unavailable (404 Not Found and friends). After a quick cup of coffee, I got the 10,000 documents onto my hard drive. This is when I realized that I had one more problem to solve.
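A minimal jsoup sketch of that link-grabbing step (the class name, sample HTML, and base URL are illustrative, not taken from the project's code):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.util.ArrayList;
import java.util.List;

public class PdfLinkExtractor {

    // Collects absolute URLs of anchors whose href ends with ".pdf".
    // The HTML is passed in directly; in a real crawler it would come
    // from an HTTP response body.
    public static List<String> extractPdfLinks(String html, String baseUrl) {
        Document doc = Jsoup.parse(html, baseUrl);
        List<String> pdfLinks = new ArrayList<>();
        // "a[href$=.pdf]" selects <a> tags whose href attribute ends with ".pdf"
        for (Element link : doc.select("a[href$=.pdf]")) {
            pdfLinks.add(link.attr("abs:href")); // resolve relative URLs against baseUrl
        }
        return pdfLinks;
    }

    public static void main(String[] args) {
        String html = "<a href='/docs/paper.pdf'>Paper</a>"
                    + "<a href='/about.html'>About</a>";
        System.out.println(extractPdfLinks(html, "https://example.org/"));
        // [https://example.org/docs/paper.pdf]
    }
}
```

Downloading each resolved URL (and handling the inevitable 404s) would then be a separate HTTP step outside jsoup.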
- Regex to find/replace text within <angle brackets> and ignore rest of the text
  It might be better to use an HTML parser instead (this one looks good at first glance: https://jsoup.org/). That said, as long as you can make certain assumptions about the HTML input (for example, that it will always have those two attributes in this order), using regular expressions to parse it is feasible.
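To illustrate the parser route for this kind of attribute extraction (the tag, attributes, and helper name below are hypothetical examples, not from the original question):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Element;

public class AttrExtractor {

    // Returns the value of one attribute from the first element matching a
    // CSS selector, or "" if nothing matches. The parser handles quoting,
    // attribute order, and whitespace that a regex would have to anticipate.
    public static String firstAttr(String html, String cssQuery, String attrName) {
        Element el = Jsoup.parse(html).selectFirst(cssQuery);
        return el == null ? "" : el.attr(attrName);
    }

    public static void main(String[] args) {
        // Attribute order and quoting style can vary; the selector doesn't care.
        String html = "<img alt='A cat' src=\"cat.png\"> trailing text";
        System.out.println(firstAttr(html, "img[src]", "src")); // cat.png
        System.out.println(firstAttr(html, "img[src]", "alt")); // A cat
    }
}
```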
- API for fuel prices
  You can use JSoup in Java: https://jsoup.org/
- One more question regarding a program I wanna write
- UIUC MCS - CS 427 Review - Software Engineering
  There are five machine problems. None of the assignments took me longer than two to three hours, and the last one I completed in less than an hour. The MPs had recently been redesigned and tied together nicely. Each one covered a different course topic in the jsoup code base.
- Any suggestions for good open source Java codebases to study (with the below criteria)?
  jsoup (https://github.com/jhy/jsoup) is a Java library for parsing HTML. Intuitive API and very readable code. I would definitely recommend it.
What are some alternatives?
- Paperless-ng - A supercharged version of paperless: scan, index, and archive all your physical documents
- Apache Nutch - An extensible and scalable web crawler
- Archive.org-Downloader - Python 3 script to download archive.org books in PDF format
- Crawler4j - Open source web crawler for Java
- mixnode-warcreader-java - Read Web ARChive (WARC) files in Java
- storm-crawler - A scalable, mature, and versatile web crawler based on Apache Storm
- Paperless - Scan, index, and archive all of your paper documents
- Sparkler - Spark-Crawler: an Apache Nutch-like crawler that runs on Apache Spark
- precomp-cpp - Precomp, C++ version - further compress already-compressed files
- JsonPath - A Java JsonPath implementation
- java-warc - Read Web ARChive (WARC) files in Java
- yq - Command-line YAML/XML/TOML processor - a jq wrapper for YAML/XML/TOML documents