auto-text vs chardet

Compare auto-text and chardet to see how they differ.

auto-text

Automatic detection (encoding, end of line, column width, etc.) for text files. 100% Common Lisp. (by defunkydrummer)

chardet

Python character encoding detector (by chardet)
                 auto-text            chardet
Mentions         1                    8
Stars            10                   2,071
Growth           -                    1.2%
Activity         10.0                 2.9
Last commit      about 5 years ago    6 months ago
Language         Common Lisp          Python
License          MIT License          GNU Lesser General Public License v3.0 only
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

auto-text

Posts with mentions or reviews of auto-text. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-11-24.
  • Encoding detection
    5 projects | /r/Common_Lisp | 24 Nov 2021
    auto-text - automatic (encoding, end of line, column width, CSV delimiter, etc.) detection for text files. MIT licensed. See also inquisitor for detection of Asian and Far Eastern languages.

chardet

Posts with mentions or reviews of chardet. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-11-05.
  • After almost a year, Ben Eater is back
    2 projects | news.ycombinator.com | 5 Nov 2022
  • 3 Ways to Handle non UTF-8 Characters in Pandas
    1 project | dev.to | 20 Jan 2022
    chardet is a library for detecting character encodings; once installed, you can pass it raw bytes to determine their encoding.
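A minimal sketch of that usage (assuming chardet has been installed, e.g. with pip install chardet):

```python
import chardet

# chardet.detect() takes raw bytes and returns a dict with the guessed
# encoding, a confidence score between 0 and 1, and a language field.
raw = "naïve café résumé".encode("utf-8") * 20  # repeated so the sample is long enough
result = chardet.detect(raw)
print(result["encoding"], result["confidence"])

# Decode using the guess, falling back to UTF-8 when detection fails.
text = raw.decode(result["encoding"] or "utf-8")
```

For large inputs, chardet also provides an incremental UniversalDetector that can be fed chunks until it becomes confident, which avoids loading the whole file into memory.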
  • In MySQL, never use “utf8”. Use “utf8mb4”
    8 projects | news.ycombinator.com | 12 Jan 2022
    The craziest issue I had was I couldn't predict what char encoding the text in my database was in. Most users entered Windows-1252, some text blobs were UTF-16, others were European character sets, and some were UTF-8. Some were Japanese SHIFT_JIS. Don't ask me how any of this happened. In retrospect, I should have dumped all the tables from MySQL and used the excellent PyPI chardet [1] library to see what I was dealing with, done the conversions, and then re-imported the data. But then someone could copy UTF-16 from a Windows document and paste it in, so you have to convert going into the database.

    You have set Apache to UTF-8, PHP to UTF-8, MySQL to UTF-8, and the MySQL driver you are using to UTF-8. It's not clear how these settings interact. Are there silent conversions happening, or do you always have to detect the encoding on data coming from the server? HTML pages have a character encoding specifier, but the BOM at the start of the file takes precedence (I think.) I got it to work by always detecting the encoding of any text coming from the database and converting with iconv, but this turned out to be really slow and unreliable. It was, by an order of magnitude, the biggest mess of any programming problem I faced in my career.

    Would not attempt again.

    [1] https://github.com/chardet/chardet
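The dump, detect, and convert workflow described above can be sketched roughly as follows. The to_utf8 helper is a hypothetical illustration, not code from the comment, and assumes chardet is installed:

```python
import chardet

def to_utf8(raw: bytes) -> bytes:
    """Guess the encoding of raw bytes with chardet, then re-encode as UTF-8.

    Hypothetical helper for illustration only.
    """
    guess = chardet.detect(raw)
    encoding = guess["encoding"] or "utf-8"  # fall back if chardet gives up
    # errors="replace" keeps the conversion from crashing on stray bytes
    return raw.decode(encoding, errors="replace").encode("utf-8")

# A Windows-1252 blob, as it might come out of a mixed-encoding table dump.
blob = "Ürsula's résumé for the café".encode("windows-1252") * 10
converted = to_utf8(blob)
print(converted.decode("utf-8"))
```

Detection on short strings is unreliable, so in a migration like the one described it helps to run detection over whole column dumps rather than individual rows, giving chardet more bytes to work with.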

  • Encoding detection
    5 projects | /r/Common_Lisp | 24 Nov 2021
    I found there is a https://github.com/chardet/chardet python library, which can be ported to Common Lisp.
  • How to convert cmd output to UTF-8
    1 project | /r/learnpython | 30 Sep 2021
    Then use chardet to determine the encoding from the content
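That approach might look like the following sketch; the subprocess command here is a portable stand-in for whatever cmd invocation actually produces the bytes, and chardet is assumed to be installed:

```python
import subprocess
import sys

import chardet

# Capture the command's stdout as raw bytes (no text= flag, so no
# premature decoding happens).
raw = subprocess.run(
    [sys.executable, "-c", "print('hëllo wörld')"],
    capture_output=True,
).stdout

# Let chardet guess the encoding, then decode to str.
guess = chardet.detect(raw)
text = raw.decode(guess["encoding"] or "utf-8", errors="replace")
print(text.strip())
```

Note that real Windows console output is often in the console code page (e.g. cp437 or cp850), which chardet may not identify confidently on short output, so keeping a fallback encoding is worthwhile.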
  • Everything to know about Requests v2.26.0
    5 projects | dev.to | 13 Jul 2021
    The library that Requests uses for content encoding detection has for the past 10 years been chardet, which is licensed LGPL-2.1.
  • PyWhat: Identify Anything
    8 projects | news.ycombinator.com | 16 Jun 2021
  • UTF-8 is not enough? Requesting help with an open source project!
    1 project | /r/django | 26 Feb 2021

What are some alternatives?

When comparing auto-text and chardet you can also consider the following projects:

inquisitor - Encoding/end-of-line detection and external-format abstraction for Common Lisp

Charset Normalizer - Truly universal encoding detector in pure Python

tika-docker - Convenience Docker images for Apache Tika Server

fuzzywuzzy - Fuzzy String Matching in Python

ftfy - Fixes mojibake and other glitches in Unicode text, after the fact.

Levenshtein - The Levenshtein Python C extension module contains functions for fast computation of Levenshtein distance and string similarity

shortuuid - A generator library for concise, unambiguous and URL-safe UUIDs.

pyfiglet - An implementation of figlet written in Python

uniout - Never see escaped bytes in output.

pangu.py - Paranoid text spacing in Python

xpinyin - Translate Chinese hanzi to pinyin (拼音) by Python, 汉字转拼音

ijson