chardet VS hachoir

Compare chardet and hachoir and see how they differ.

chardet

Python character encoding detector (by chardet)

hachoir

Hachoir is a Python library to view and edit a binary stream field by field (by vstinner)
                chardet                                       hachoir
Mentions        8                                             3
Stars           2,071                                         586
Growth          1.2%                                          -
Activity        2.9                                           6.4
Latest commit   6 months ago                                  2 months ago
Language        Python                                        Python
License         GNU Lesser General Public License v3.0 only   GNU General Public License v3.0 only
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

chardet

Posts with mentions or reviews of chardet. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-11-05.
  • After almost a year, Ben Eater is back
    2 projects | news.ycombinator.com | 5 Nov 2022
  • 3 Ways to Handle non UTF-8 Characters in Pandas
    1 project | dev.to | 20 Jan 2022
    chardet is a library for detecting character encodings; once installed, it can determine the encoding of raw bytes.
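    A minimal sketch of what that looks like, assuming chardet is installed (`pip install chardet`); the sample bytes are made up:

    ```python
    # Sketch: detect the encoding of raw bytes with chardet, then decode.
    # Assumes `pip install chardet`; the input bytes are an illustrative example.
    import chardet

    raw = "façade".encode("latin-1")  # bytes whose encoding we pretend not to know
    result = chardet.detect(raw)      # dict with 'encoding', 'confidence', 'language'
    text = raw.decode(result["encoding"] or "utf-8", errors="replace")
    ```

    chardet returns a best guess with a confidence score, so a fallback encoding (and `errors="replace"`) is still worth keeping for low-confidence inputs.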
  • In MySQL, never use “utf8”. Use “utf8mb4”
    8 projects | news.ycombinator.com | 12 Jan 2022
    The craziest issue I had was that I couldn't predict what character encoding the text in my database was in. Most users entered Windows-1252, some text blobs were UTF-16, others were European character sets, some were UTF-8, and some were Japanese SHIFT_JIS. Don't ask me how any of this happened. In retrospect, I should have dumped all the tables from MySQL, used the excellent chardet [1] library to see what I was dealing with, done the conversions, and then re-imported the data. But then someone could copy UTF-16 from a Windows document and paste it in, so you have to convert going into the database.

    You have to set Apache to UTF-8, PHP to UTF-8, MySQL to UTF-8, and the MySQL driver you are using to UTF-8. It's not clear how these settings interact. Are there silent conversions happening, or do you always have to detect the encoding of data coming from the server? HTML pages have a character-encoding specifier, but the BOM at the start of the file takes precedence (I think). I got it to work by always detecting the encoding of any text coming from the database and using iconv, but this turned out to be really slow and unreliable. It was truly a bigger mess, by an order of magnitude, than any other programming problem I have faced in my career.

    Would not attempt again.

    [1] https://github.com/chardet/chardet

  • Encoding detection
    5 projects | /r/Common_Lisp | 24 Nov 2021
    I found the https://github.com/chardet/chardet Python library, which could be ported to Common Lisp.
  • How to convert cmd output to UTF-8
    1 project | /r/learnpython | 30 Sep 2021
    Then use chardet to determine the encoding from the content.
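    A hedged sketch of that approach: capture the subprocess's raw bytes and let chardet guess the encoding before decoding. The command below is a portable stand-in; on Windows you would run the actual command, e.g. `["cmd", "/c", "dir"]`.

    ```python
    # Sketch: decode subprocess output of unknown encoding via chardet.
    # Assumes `pip install chardet`; the command is a portable stand-in
    # for a real cmd invocation such as ["cmd", "/c", "dir"].
    import subprocess
    import sys

    import chardet

    raw = subprocess.run(
        [sys.executable, "-c", "print('héllo')"],
        capture_output=True,
    ).stdout
    guess = chardet.detect(raw)  # e.g. {'encoding': 'utf-8', 'confidence': ...}
    text = raw.decode(guess["encoding"] or "utf-8", errors="replace")
    ```

    On Windows, console programs often emit an OEM code page (e.g. cp437 or cp850) rather than UTF-8, which is exactly why detection helps here.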
  • Everything to know about Requests v2.26.0
    5 projects | dev.to | 13 Jul 2021
    The library that Requests uses for content encoding detection has for the past 10 years been chardet which is licensed LGPL-2.1.
  • PyWhat: Identify Anything
    8 projects | news.ycombinator.com | 16 Jun 2021
  • UTF-8 is not enough? Requesting help with an open source project!
    1 project | /r/django | 26 Feb 2021

hachoir

Posts with mentions or reviews of hachoir. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-15.
  • Magika: AI powered fast and efficient file type identification
    15 projects | news.ycombinator.com | 15 Feb 2024
    https://github.com/vstinner/hachoir/blob/main/hachoir/subfil...

    File signature:

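    The linked hachoir code isn't reproduced here, but the underlying idea, matching a file's leading "magic" bytes against a table of known signatures, can be sketched in plain Python (the signature table below is a small hand-picked subset, not hachoir's actual list):

    ```python
    # Illustrative sketch of file-signature matching, the idea behind
    # hachoir's subfile module. The table is a hand-picked subset.
    SIGNATURES = {
        b"\x89PNG\r\n\x1a\n": "PNG image",
        b"\xff\xd8\xff": "JPEG image",
        b"PK\x03\x04": "ZIP archive",
        b"%PDF-": "PDF document",
    }

    def identify(data: bytes) -> str:
        """Return a file-type name based on the leading magic bytes."""
        for magic, name in SIGNATURES.items():
            if data.startswith(magic):
                return name
        return "unknown"

    # identify(b"%PDF-1.7 ...") -> "PDF document"
    ```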
  • Kaitai Struct: A new way to develop parsers for binary structures
    12 projects | news.ycombinator.com | 17 Mar 2022
    I contributed a number of file formats a few years ago (and attempted numerous others) but ran into several problems with certain file formats:

    1. It's not possible to read from the file until a multi-byte termination sequence is detected. [1]

    2. You can't read sections of a file where the termination condition is the presence of a sequence of bytes denoting the next, unrelated section of the file (and you don't want to consume/read those bytes). [2]

    3. The WebIDE at the time couldn't handle very large file format specifications such as Photoshop (PSD) [3]

    4. Files containing compressed or encrypted sections require a compression/encryption algorithm to be hardcoded into Kaitai struct libraries for each programming language it can output to.

    I particularly liked the WebIDE, as it makes it easy to get started and share results. I also liked how Kaitai Struct allows easy definition of constraints (simple ones at least) in the file format specification, so that you can say "this section of the file shall have a size not exceeding header.length * 2 bytes".

    Some alternative binary file format specification attempts for those interested in seeing alternatives, each with their own set of problems/pros/cons:

    1. 010 Editor [4]

    2. Synalysis [5]

    3. hachoir [6]

    4. DFDL [7]

    [1] https://github.com/kaitai-io/kaitai_struct/issues/158

    [2] https://github.com/kaitai-io/kaitai_struct/issues/156

    [3] https://raw.githubusercontent.com/davidhicks/kaitai_struct_f...

    [4] https://www.sweetscape.com/010editor/repository/templates/

    [5] https://github.com/synalysis/Grammars

    [6] https://github.com/vstinner/hachoir/tree/main/hachoir/parser

    [7] https://github.com/DFDLSchemas/

  • PyWhat: Identify Anything
    8 projects | news.ycombinator.com | 16 Jun 2021
    Another one sort of related is hachoir, and specifically the hachoir-metadata script: https://github.com/vstinner/hachoir
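    Programmatic use looks roughly like the following sketch, assuming hachoir is installed (`pip install hachoir`); the example path in the comment is hypothetical.

    ```python
    # Sketch: extract metadata from a binary file with hachoir, mirroring
    # what the hachoir-metadata command-line script does.
    # Assumes `pip install hachoir`; the sample path is hypothetical.
    from hachoir.parser import createParser
    from hachoir.metadata import extractMetadata

    def file_metadata(path):
        """Parse a binary file and return its metadata as text lines, or None."""
        parser = createParser(path)
        if parser is None:
            return None  # unrecognized file format
        with parser:
            metadata = extractMetadata(parser)
        return metadata.exportPlaintext() if metadata else None

    # e.g. file_metadata("photo.jpg") might list image dimensions and camera model
    ```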

What are some alternatives?

When comparing chardet and hachoir you can also consider the following projects:

Charset Normalizer - Truly universal encoding detector in pure Python

binrw - A Rust crate for helping parse and rebuild binary data using ✨macro magic✨.

fuzzywuzzy - Fuzzy String Matching in Python

usaddress - :us: a python library for parsing unstructured United States address strings into address components

ftfy - Fixes mojibake and other glitches in Unicode text, after the fact.

fuckitjs - The Original Javascript Error Steamroller

Levenshtein - The Levenshtein Python C extension module contains functions for fast computation of Levenshtein distance and string similarity

pyWhat - 🐸 Identify anything. pyWhat easily lets you identify emails, IP addresses, and more. Feed it a .pcap file or some text and it'll tell you what it is! 🧙‍♀️

shortuuid - A generator library for concise, unambiguous and URL-safe UUIDs.

probablepeople - :family: a python library for parsing unstructured western names into name components.

pyfiglet - An implementation of figlet written in Python

smm2-documentation - Documentation for the game Super Mario Maker 2.