You can't just assume UTF-8

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • timbos-hn-reader

    Creates thumbnails and extracts metadata from websites linked from Hacker News, and presents the information in a clear, information-rich feed so you can find what you actually want to read.

  • Fascinating topic. There are two ways the user/client/browser receives reports about the character encoding of content. And there are hefty caveats about how reliable those reports are.

    (1) First, the Web server usually reports a character encoding, a.k.a. charset, in the HTTP headers that come with the content. Of course, the HTTP headers are not part of the HTML document but are rather part of the overhead of what the Web server sends to the user/client/browser. (The HTTP headers and the `head` element of an HTML document are entirely different.) One of these HTTP headers is called Content-Type, and conventionally this header often reports a character encoding, e.g., "Content-Type: text/html; charset=UTF-8". So this is one place the character encoding is reported.

    If the actual content is not an (X)HTML file, the HTTP header might be the only report the user/client/browser receives about the character encoding. Consider accessing a plain text file via HTTP. The text file isn't likely to itself contain information about what character encoding it uses. The HTTP header of "Content-Type: text/plain; charset=UTF-8" might be the only character encoding information that is reported.
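
    As a concrete sketch of where that header-level report shows up on the client side (assuming the `requests` package; the URL is a stand-in):

        import requests

        resp = requests.get("https://example.com/")          # hypothetical URL
        content_type = resp.headers.get("Content-Type", "")  # e.g. "text/html; charset=UTF-8"
        declared = None
        for part in content_type.split(";")[1:]:
            name, _, value = part.strip().partition("=")
            if name.lower() == "charset":
                declared = value.strip('"').strip("'")
        print("server-declared charset:", declared)  # may be None, wrong, or a typo like "uft-8"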

    (2) Now, if the content is an (X)HTML page, a character encoding is often also reported in the content itself, generally in the HTML document's head section in a meta tag such as '<meta charset="UTF-8">' or '<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">'. But consider, this again is just a report of a character encoding.
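
    A standard-library sketch of reading that in-document report (the tag forms handled are the usual HTML5 and legacy HTML4 variants; the sample markup is made up):

        from html.parser import HTMLParser

        class MetaCharsetFinder(HTMLParser):
            """Collects the first charset reported by a <meta> tag."""
            def __init__(self):
                super().__init__()
                self.charset = None

            def handle_starttag(self, tag, attrs):
                if tag != "meta" or self.charset:
                    return
                attrs = dict(attrs)
                if "charset" in attrs:  # <meta charset="UTF-8">
                    self.charset = attrs["charset"]
                elif (attrs.get("http-equiv") or "").lower() == "content-type":
                    # <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
                    content = (attrs.get("content") or "").lower()
                    if "charset=" in content:
                        self.charset = content.split("charset=")[-1].strip()

        finder = MetaCharsetFinder()
        finder.feed('<html><head><meta charset="ISO-8859-1"></head><body>...</body></html>')
        print(finder.charset)  # "ISO-8859-1" -- still just a report, not ground truth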

    Consider the case of a program that generates web pages from a boilerplate template that still uses an ancient default of ISO-8859-1 in the meta charset tag of its head element, even though the body content that goes into the template is pulled from a database that emits UTF-8 by default. Boom. Mismatch. Janky code is spitting out mismatched and inaccurate character encoding information every day.

    Or consider web servers. Consider a web server whose config file contains the typo "uft-8" because somebody fat-fingered it while updating the config (I've seen this in real pages!). Or consider a web server that uses a global default of "utf-8" in its outgoing HTTP headers even when the content being served is a hodge-podge of UTF-8, WINDOWS-1251, WINDOWS-1252, and ISO-8859-1. This too happens all the time.
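
    A two-line illustration of what such a mismatch does to a reader that trusts the report: bytes that are really UTF-8, rendered under a bogus ISO-8859-1 declaration, come out as mojibake.

        body = "café naïve".encode("utf-8")  # the actual content: UTF-8 bytes
        print(body.decode("iso-8859-1"))     # a client trusting the bad report renders "cafÃ© naÃ¯ve"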

    I think the most important takeaway is that with both HTTP headers and meta tags, there's no intrinsic link between the character encoding being reported and the actual character encoding of the content. What a Web server tells me and what's in the meta tag in the markup just count as two reports. They might be accurate, they might not be. If it really matters to me what the character encoding is, there's nothing for it but to determine the character encoding myself.
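
    One low-effort version of "determining it myself" (a sketch; the URL is a placeholder) is simply to compare what the headers claim against what the bytes themselves suggest:

        import requests

        resp = requests.get("https://example.com/legacy-page")  # placeholder URL
        print("declared by headers:", resp.encoding)            # taken from Content-Type, if present
        print("guessed from bytes:", resp.apparent_encoding)    # statistical detection over the payload
        # When the two disagree, neither report settles it; only the bytes do.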

    I have a Hacker News reader, https://www.thnr.net, and my program downloads the URL for every HN story with an outgoing link. Because I'm fastidious and I want to know what a file actually is, I have a function `get_textual_mimetype` that analyzes the content of what the URL's web server sends me. I have seen binary files sent with a "UTF-8" Content-Type header. I have seen UTF-8 files sent with an "inode/x-empty" Content-Type header. So I download the content, and I use `iconv` and `isutf8` to get some information about what encoding it might be. I use `xmlwf` to check if it's well-formed XML. I use `jq` to check whether it's valid JSON. I use `libmagic`. My goal is to determine with a high degree of certainty whether what's been sent to me is an application/pdf, an image/webp, a text/html, an application/xhtml+xml, a text/x-csrc, or what. Only a rigorous analysis will tell you the truth. (If anyone is curious, the source for `get_textual_mimetype` is in the repo for my HN reader project: https://github.com/timoteostewart/timbos-hn-reader/blob/main... )
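
    For readers who don't want to shell out to iconv/isutf8/xmlwf/jq, here is a rough, hand-rolled approximation of that kind of content sniffing. It is not the author's `get_textual_mimetype`; it leans on the `python-magic` libmagic binding plus cheap validity checks:

        import json
        import magic  # the python-magic binding around libmagic

        def sniff(data: bytes) -> dict:
            report = {"libmagic_mime": magic.from_buffer(data, mime=True)}
            try:
                text = data.decode("utf-8")
                report["valid_utf8"] = True
            except UnicodeDecodeError:
                report["valid_utf8"] = False
                return report
            try:
                json.loads(text)
                report["valid_json"] = True
            except json.JSONDecodeError:
                report["valid_json"] = False
            return report

        print(sniff(b'{"ok": true}'))  # mime type from libmagic plus the two validity flags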

  • libchardet

    libchardet - Mozilla's Universal Charset Detector C/C++ API

  • ??? The work of using one of the many encoding-guessing tools (e.g., https://github.com/Joungkyun/libchardet) and then getting it correct for almost every document?

    You just look bad if you can't do what every other piece of software is able to do. Charging for it takes that to another level.
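
    For what it's worth, the same detector family is available from Python: the `chardet` package is a port of Mozilla's universal charset detector, the engine that libchardet wraps. A minimal guess looks like this (the Cyrillic sample is made up):

        import chardet

        sample = ("Привет, мир! Это пример текста в кодировке Windows-1251. " * 3).encode("windows-1251")
        print(chardet.detect(sample))
        # e.g. {'encoding': 'windows-1251', 'confidence': 0.9, 'language': 'Russian'}
        # Detection is probabilistic: short or mixed samples can come back wrong or with low confidence.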

  • ftfy

    Fixes mojibake and other glitches in Unicode text, after the fact.

  • If you’re actually in a position where you need to guess the encoding, something like “ftfy” <https://github.com/rspeer/python-ftfy> (webapp: <https://ftfy.vercel.app/>) is a perfectly reasonable choice.

    But, you should always do your absolute utmost not to be put in a situation where guessing is your only choice.
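
    A minimal use of ftfy itself, on mojibake like the UTF-8-decoded-as-Latin-1 examples above:

        import ftfy

        broken = "cafÃ© naÃ¯ve"       # UTF-8 bytes that were once mis-decoded as Latin-1
        print(ftfy.fix_text(broken))  # -> "café naïve"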

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Related posts

  • This Week In Python

    5 projects | dev.to | 10 May 2024
  • echo -e doesn't work

    1 project | /r/bash | 27 Apr 2023
  • CS50P WEEK 4 FRANK, IAN AND GLENS LETTERS PSET.

    1 project | /r/cs50 | 17 Nov 2022
  • import fuzzywuzzy

    3 projects | /r/ProgrammerHumor | 22 Feb 2022
  • Making a command-line rpg in python (Day 1)

    1 project | dev.to | 4 Feb 2022