rdrview
Hacker News API
| | rdrview | Hacker News API |
|---|---|---|
| Mentions | 10 | 82 |
| Stars | 828 | 10,876 |
| Growth | - | 1.5% |
| Activity | 4.1 | 0.0 |
| Latest commit | about 2 months ago | 8 months ago |
| Language | C | - |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rdrview
-
Mozilla: Readability.js
See also the C port here: https://github.com/eafer/rdrview/
It works well with text-mode browsers like w3m.
-
Firefox 'naked'
I also use rdrview sometimes.
- Is there a CLI tool to download only the relevant text from an article? A mix of curl and the Tranquility Firefox addon?
-
w3m rocks
They both parse untrusted content without sandboxing.
I typically send content through rdrview[0] before piping through w3m-sandbox[1], which should be pretty safe.
[0]: https://github.com/eafer/rdrview
[1]: https://git.sr.ht/~seirdy/bwrap-scripts/tree/trunk/item/w3m-...
-
reader, a minimal command line reader offering better readability of web pages on the CLI
It would have been nice to have this integrated into w3m, something along the lines of rdrview.
- How to apply readability to already saved HTML pages?
-
Reading from the web offline and distraction-free
I do a lot of this work[3] (web to documents) and it's interesting to see other approaches. The medium image problem is something I've faced as well, but never got around to fixing. I'm planning to get a Remarkable soon, so will definitely be trying this out.
My personal solution has been https://github.com/captn3m0/url-to-epub/ (Node/readability), which I've tested against the entirety of Tor's original fiction collection[0] where it performs well enough (I'm biased). Another tool that does this beautifully well is percollate[1], but it doesn't give enough control of the metadata to the user - something I really care about.
I've also started to use rdrview[2], a C port of the current Firefox implementation of "reader view". It is very unix-y, so it is easy to pipe content to it (I usually run it through tidy first). Quite helpful for building web-archiving, web-to-PDF, or web-to-Kindle pipelines.
[0]: https://www.tor.com/category/all-fiction/original-fiction/
[1]: https://github.com/danburzo/percollate
[2]: https://github.com/eafer/rdrview
-
Show HN: Hackernews_tui – A Terminal UI to Browse Hacker News Discussions
Two projects that do this with nearly identical output:
- https://github.com/eafer/rdrview
- https://github.com/go-shiori/go-readability
Pipe the filtered HTML output into your favorite textual web browser for an ideal reading experience.
-
Newsboat / w3m show only article data
This may help if you can do some piping around it: https://github.com/eafer/rdrview
-
Ask HN: Freelancer? Seeking freelancer? (January 2021)
SEEKING WORK | Argentina | Remote
Email: [email protected]
I'm a programmer, most familiar with C on Linux and Win32. I'll be happy to start a project from scratch, or to help support any old codebase. For a sample of my work please see rdrview [1], a small command line tool that found some success here on Hacker News; or [2], a naive filesystem implementation I've been working on.
My current rate is 20 USD/hour. For what it's worth, I have a background in math.
Hacker News API
-
Hacker News Rankings. Graphs of HN Posts Rankings
I recognise the huge amounts of effort involved in this and I applaud the moderators for keeping HN an interesting place to be.
That said, I think it's reasonable for us to have visibility into their manual interventions, and this could easily be surfaced via the Hacker News API (https://github.com/HackerNews/API) if the story JSON included the values of the "contro," "bury," and "gag" fields, which are currently opaque to users of the API.
See https://medium.com/hacking-and-gonzo/how-hacker-news-ranking... for more discussion of the terminology.
-
Hacker News Stats: 2007–2022
Google probably stopped updating BigQuery when they started hosting live Hacker News data in Firebase: https://github.com/HackerNews/API
The live nature of the Firebase data is awesome, but the lack of ability to query is a loss.
-
Show HN: Tech Jobs on the Command Line
Nice work! I did something similar with a personal project a few months ago using an open-source LLM. Also, not sure if you know, but there is an API you can use: https://github.com/HackerNews/API
-
Show HN: New Hacker News posts and comments in realtime
Hi HN, I made a live feed for viewing all of the new items on Hacker News in almost-realtime. It is a single, modern HTML file that doesn't use polling.
It works by establishing a websocket connection to the Hacker News Firebase database, to receive updates every time the HN server updates Firebase, which is about once every 30 seconds. This is very efficient, putting no load on HN's server and using minimal bandwidth.
To make the feed continuous despite the delay, it waits exactly 30 seconds before displaying each item. I think this is a fair tradeoff; it gives you a sense of how active HN is. For comparison, there are something like 6-7 thousand tweets every second.
Official HN Firebase API: https://github.com/HackerNews/API
Source code: https://github.com/jerbear2008/hn-live/blob/main/index.html
- Applying MVVM in Phoenix LiveView
-
Ask HN: How to track subjects in HN like a particular programming language?
You could use the API: https://github.com/HackerNews/API
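For example, a minimal Python sketch of topic tracking against the official `/v0/newstories` and `/v0/item/<id>` endpoints; the `mentions_topic` and `stories_about` helpers are hypothetical names, not part of the API:

```python
import json
import urllib.request

API = "https://hacker-news.firebaseio.com/v0"

def mentions_topic(title, keywords):
    """Case-insensitive check for any keyword in a story title."""
    title = (title or "").lower()
    return any(k.lower() in title for k in keywords)

def fetch_json(path):
    """Fetch one endpoint of the official HN Firebase API."""
    with urllib.request.urlopen(f"{API}/{path}.json") as resp:
        return json.load(resp)

def stories_about(keywords, limit=100):
    """Scan the newest stories for titles mentioning any keyword."""
    for item_id in fetch_json("newstories")[:limit]:
        item = fetch_json(f"item/{item_id}")
        if item and mentions_topic(item.get("title"), keywords):
            yield item

# Usage (hits the live API):
#   for story in stories_about(["rust", "zig"]):
#       print(story["id"], story["title"])
```

Polling this on a timer and diffing against already-seen ids would give you a simple subject tracker.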
-
Has Hacker News stopped uploading its dataset in 2022?
You can now get Hacker News data in real time from the Hacker News API powered by Firebase: https://github.com/HackerNews/API
This is great (real time!), but also kind of a pain (38+ million individual HTTP requests to get the whole thing).
Thankfully there's no authentication or apparent rate limiting. I fumbled my way through downloading the whole thing with curl. I screwed up a few times, so I made over 70 million requests in total.
Toy analysis of the data I downloaded here: https://public.tableau.com/app/profile/isna/viz/HackerNewsDa...
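A sketch of what that bulk download looks like in Python, assuming only the documented `/v0/maxitem` and `/v0/item/<id>` endpoints; item ids count up sequentially, which is why the whole dataset is reachable by brute force:

```python
import json
import urllib.request

API = "https://hacker-news.firebaseio.com/v0"

def item_url(item_id):
    """URL of one item in the official HN Firebase API."""
    return f"{API}/item/{item_id}.json"

def max_item():
    """Current largest item id assigned by HN."""
    with urllib.request.urlopen(f"{API}/maxitem.json") as resp:
        return json.load(resp)

def all_item_ids(upper):
    """Item ids start at 1, so the full dataset is just a range."""
    return range(1, upper + 1)

def fetch_item(item_id):
    with urllib.request.urlopen(item_url(item_id)) as resp:
        return json.load(resp)

# Usage (tens of millions of requests; pace yourself):
#   for i in all_item_ids(max_item()):
#       record = fetch_item(i)  # persist however you like
```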
-
Ask HN: How do I find my most popular HN posts?
Get your submissions from this API
https://hacker-news.firebaseio.com/v0/user/ohjeez.json?print...
and then scan the "submitted" articles as described here
https://github.com/HackerNews/API
I have a crawler that sucks down all the posts from HN and then I read it into Pandas and write all sorts of queries. The boggle I have now is that I want to use the same system to (1) make sure YOShInOn never submits duplicate articles, and (2) have accurate vote and comment scores. (1) requires picking up articles as soon as possible, (2) requires waiting two weeks or so until the scores have settled down to what they are going to be. I guess I gotta come back and rescan things in 2 weeks so I have the right scores.
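The two steps above (fetch the user's "submitted" list, then rank the items) can be sketched in Python; `top_by_score` and `top_submissions` are hypothetical helper names, while the `submitted`, `title`, and `score` fields come from the API docs:

```python
import json
import urllib.request

API = "https://hacker-news.firebaseio.com/v0"

def fetch_json(path):
    """Fetch one endpoint of the official HN Firebase API."""
    with urllib.request.urlopen(f"{API}/{path}.json") as resp:
        return json.load(resp)

def top_by_score(items, n=10):
    """Keep stories only (comments have no title) and rank by score."""
    stories = [i for i in items if i and i.get("title")]
    return sorted(stories, key=lambda i: i.get("score", 0), reverse=True)[:n]

def top_submissions(username, n=10):
    """Fetch a user's 'submitted' ids and return their highest-scored posts."""
    submitted = fetch_json(f"user/{username}").get("submitted", [])
    return top_by_score((fetch_json(f"item/{i}") for i in submitted), n)

# Usage (hits the live API, one request per submitted item):
#   for item in top_submissions("ohjeez", 5):
#       print(item["score"], item["title"])
```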
-
Show HN: Hacker News User Information on Hover
Looking at the Hacker News API, you might be able to use it instead of scraping the DOM: https://github.com/HackerNews/API#users
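A minimal Python sketch of that approach, using the `/v0/user/<id>` endpoint from the API docs (field names like `karma` come from there; `fetch_user` is a hypothetical helper):

```python
import json
import urllib.request

def user_url(username):
    """Profile endpoint of the official HN Firebase API."""
    return f"https://hacker-news.firebaseio.com/v0/user/{username}.json"

def fetch_user(username):
    """Return the public profile: id, created, karma, about, submitted."""
    with urllib.request.urlopen(user_url(username)) as resp:
        return json.load(resp)

# Usage (hits the live API):
#   profile = fetch_user("pg")
#   print(profile["karma"], profile.get("about", ""))
```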
-
Show HN: Hacker News Year in Review: 2023
You just need to make 38 million requests to the Firebase API documented here: https://github.com/HackerNews/API. :-)
I actually made many more than that because I screwed up a few times. Might clean up the compiled data and put it on Kaggle (GitHub? Torrent?), or anywhere else if people have suggestions.
What are some alternatives?
percollate - A command-line tool to turn web pages into readable PDF, EPUB, HTML, or Markdown docs.
hnrss - Custom, realtime RSS feeds for Hacker News
go-readability - Go package that cleans an HTML page for better readability.
hackernews - Hacker News web site source code mirror.
parser - 📜 Extract meaningful content from the chaos of a web page
laravel-localization - Easy localization for Laravel
hackernews-TUI - A Terminal UI to browse Hacker News
https-everywhere - A browser extension that encrypts your communications with many websites that offer HTTPS but still allow unencrypted connections.
w3m - Debian's w3m: WWW browsable pager
hnterm - :page_with_curl: Hacker News in the terminal
zimit - Make a ZIM file from any Web site and surf offline!
jfq - JSONata on the command line