timbos-hn-reader

Creates thumbnails and extracts metadata from websites linked to from Hacker News and presents the information in a clear, information-rich feed so you can find what you actually want to read. (by timoteostewart)

timbos-hn-reader reviews and mentions

Posts with mentions or reviews of timbos-hn-reader. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-29.
  • You can't just assume UTF-8
    3 projects | news.ycombinator.com | 29 Apr 2024
    Fascinating topic. There are two ways the user/client/browser receives reports about the character encoding of content. And there are hefty caveats about how reliable those reports are.

    (1) First, the Web server usually reports a character encoding, a.k.a. charset, in the HTTP headers that come with the content. Of course, the HTTP headers are not part of the HTML document but are rather part of the overhead of what the Web server sends to the user/client/browser. (The HTTP headers and the `head` element of an HTML document are entirely different.) One of these HTTP headers is called Content-Type, and conventionally this header often reports a character encoding, e.g., "Content-Type: text/html; charset=UTF-8". So this is one place the character encoding is reported.

    If the actual content is not an (X)HTML file, the HTTP header might be the only report the user/client/browser receives about the character encoding. Consider accessing a plain text file via HTTP. The text file isn't likely to itself contain information about what character encoding it uses. The HTTP header of "Content-Type: text/plain; charset=UTF-8" might be the only character encoding information that is reported.

    (2) Now, if the content is an (X)HTML page, a character encoding is often also reported in the content itself, generally in the HTML document's head section in a meta tag such as `<meta charset="utf-8">` or `<meta http-equiv="Content-Type" content="text/html; charset=utf-8">`. But consider: this again is merely a report of a character encoding.
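    Collecting these in-document declarations can be done with Python's standard-library HTML parser. A minimal sketch (the class and function names here are invented for illustration):

```python
from html.parser import HTMLParser

class MetaCharsetFinder(HTMLParser):
    """Collect charset declarations from <meta> tags in an HTML document."""

    def __init__(self):
        super().__init__()
        self.declared: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        if "charset" in d:
            # The HTML5 form: <meta charset="...">
            self.declared.append(d["charset"].lower())
        elif d.get("http-equiv", "").lower() == "content-type":
            # The older form: <meta http-equiv="Content-Type"
            #                       content="text/html; charset=...">
            for part in d.get("content", "").split(";"):
                part = part.strip()
                if part.lower().startswith("charset="):
                    self.declared.append(part.split("=", 1)[1].lower())

def declared_charsets(html: str) -> list[str]:
    """Return every charset a document declares about itself, in order."""
    finder = MetaCharsetFinder()
    finder.feed(html)
    return finder.declared
```

    Again, what this returns is a self-report by the document, nothing more.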

    Consider the case of a program that generates web pages using a boilerplate template still using an ancient default of ISO-8859-1 in the meta charset tag of its head element, even though the body content that goes into the template is being pulled from a database that spits out a default of utf-8. Boom. Mismatch. Janky code is spitting out mismatched and inaccurate character encoding information every day.

    Or consider web servers. Consider a web server whose config file contains the typo "uft-8" because somebody fat-fingered it while updating the config (I've seen this in live pages!). Or consider a web server that uses a global default of "utf-8" in its outgoing HTTP headers even when the content being served is a hodgepodge of UTF-8, WINDOWS-1251, WINDOWS-1252, and ISO-8859-1. This too happens all the time.

    I think the most important takeaway is that with both HTTP headers and meta tags, there's no intrinsic link between the character encoding being reported and the actual character encoding of the content. What a Web server tells me and what's in the meta tag in the markup just count as two reports. They might be accurate, they might not be. If it really matters to me what the character encoding is, there's nothing for it but to determine the character encoding myself.

    I have a Hacker News reader, https://www.thnr.net, and my program downloads the URL for every HN story with an outgoing link. Because I'm fastidious and I want to know what a file actually is, I have a function `get_textual_mimetype` that analyzes the content of what the URL's web server sends me. I have seen binary files sent with a "UTF-8" Content-Type header. I have seen UTF-8 files sent with an "inode/x-empty" Content-Type header. So I download the content, and I use `iconv` and `isutf8` to get some information about what encoding it might be. I use `xmlwf` to check if it's well-formed XML. I use `jq` to check whether it's valid JSON. I use `libmagic`. My goal is to determine with a high degree of certainty whether what's been sent to me is an application/pdf, an image/webp, a text/html, an application/xhtml+xml, a text/x-csrc, or what. Only a rigorous analysis will tell you the truth. (If anyone is curious, the source for `get_textual_mimetype` is in the repo for my HN reader project: https://github.com/timoteostewart/timbos-hn-reader/blob/main... )
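    A rough, pure-Python analogue of that kind of content sniffing, using stdlib stand-ins for the external tools named above (`json` for `jq`, `xml.etree` for `xmlwf`, magic-number checks for `libmagic`). This is a hypothetical sketch, not the project's actual `get_textual_mimetype`:

```python
import json
import xml.etree.ElementTree as ET

def guess_textual_mimetype(data: bytes) -> str:
    """Guess a MIME type by inspecting the content itself,
    never trusting any reported Content-Type header."""
    # Magic numbers first, the way libmagic would catch binary formats.
    if data.startswith(b"%PDF-"):
        return "application/pdf"
    if data.startswith(b"RIFF") and data[8:12] == b"WEBP":
        return "image/webp"
    # If it isn't decodable text, stop here.
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError:
        return "application/octet-stream"
    stripped = text.lstrip()
    # Valid JSON? (the role jq plays in the comment above)
    if stripped.startswith(("{", "[")):
        try:
            json.loads(stripped)
            return "application/json"
        except json.JSONDecodeError:
            pass
    # Well-formed XML? (the role xmlwf plays)
    if stripped.startswith("<"):
        try:
            ET.fromstring(stripped)
            return "text/xml"
        except ET.ParseError:
            return "text/html"  # markup, but not well-formed XML
    return "text/plain"
```

    The real function handles many more cases (XHTML vs. HTML, source-code types like text/x-csrc, empty files), but the principle is the same: classify by what the bytes are, not by what the server says they are.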

  • Skim HN’s story feeds but with added metadata about linked articles
    1 project | news.ycombinator.com | 4 Jan 2023
    Timbo’s “Hacker News” Reader (THNR) ingests HN’s news, new, best, classic, and active story feeds and displays the stories with thumbnail images from the linked article plus creature comforts like the estimated reading time, the name or handle of the article’s author, the percentages of programming languages (for GitHub links), a preview of the first page of PDFs along with the total PDF page count, and more. My aim in surfacing all this metadata from the linked articles was to help me find the stories I want to read, and I think it’ll serve that purpose for others too.

    The comments link for each story goes straight to HN’s regular comments page for each story if you want to read or make comments.

    THNR’s about page: https://dev.thnr.net/about/

    GitHub repo: https://github.com/timoteostewart/timbos-hn-reader

Stats

Basic timbos-hn-reader repo stats
Mentions: 2
Stars: 2
Activity: 6.4
Last commit: 7 days ago

timoteostewart/timbos-hn-reader is an open source project licensed under the MIT License, which is an OSI-approved license.

The primary programming language of timbos-hn-reader is HTML.

