ftfy vs chardet

| | ftfy | chardet |
| --- | --- | --- |
| Mentions | 2 | 8 |
| Stars | 3,715 | 2,076 |
| Growth | 0.7% | 0.6% |
| Activity | 5.5 | 2.9 |
| Latest commit | 21 days ago | 6 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU Lesser General Public License v3.0 only |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative number indicating how actively a project is being developed, with recent commits weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
ftfy
- You can't just assume UTF-8
  > If you're actually in a position where you need to guess the encoding, something like ftfy (<https://github.com/rspeer/python-ftfy>; webapp: <https://ftfy.vercel.app/>) is a perfectly reasonable choice.
  > But you should always do your absolute utmost not to be put in a situation where guessing is your only choice.
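For context, ftfy exposes two relevant entry points: `fix_text()` for repairing mojibake and `guess_bytes()` for last-resort encoding guessing. A minimal sketch (the sample strings are made up for illustration; the bytes are Windows-1252 curly quotes):

```python
import ftfy

# fix_text() repairs mojibake: text that was decoded with the wrong codec.
print(ftfy.fix_text("âœ” No problems"))  # -> "✔ No problems"

# guess_bytes() makes a best-effort guess at the encoding of raw bytes;
# use it only when you truly cannot know the encoding in advance.
text, encoding = ftfy.guess_bytes(b"\x93smart quotes\x94")
print(encoding, text)
```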
- 7 Useful Python Libraries You Should Use in Your Next Project

chardet
- After almost a year, Ben Eater is back
- 3 Ways to Handle non-UTF-8 Characters in Pandas
  > chardet is a library for detecting character encodings; once installed, you can use the following to determine the encoding:
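The snippet itself didn't survive the excerpt; the usual pattern with `chardet.detect()` looks like this (the file name is a placeholder):

```python
import chardet

# chardet works on raw bytes, so read the file in binary mode.
with open("data.csv", "rb") as f:  # "data.csv" is a hypothetical file name
    raw = f.read()

result = chardet.detect(raw)
# result is a dict, e.g. {'encoding': 'Windows-1252', 'confidence': 0.73, ...}
print(result["encoding"], result["confidence"])
```

The detected encoding can then be passed along, e.g. as the `encoding=` argument to `pandas.read_csv()`.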
- In MySQL, never use "utf8". Use "utf8mb4"
  > The craziest issue I had was that I couldn't predict what character encoding the text in my database was in. Most users entered Windows-1252, some text blobs were UTF-16, others were European character sets, and some were UTF-8. Some were Japanese Shift_JIS. Don't ask me how any of this happened. In retrospect, I should have dumped all the tables from MySQL, used the excellent chardet [1] library from PyPI to see what I was dealing with, done the conversions, and then re-imported the data. But then someone could copy UTF-16 from a Windows document and paste it in, so you have to convert going into the database.
  > You have set Apache to UTF-8, PHP to UTF-8, MySQL to UTF-8, and the MySQL driver you are using to UTF-8. It's not clear how these settings interact. Are there silent conversions happening, or do you always have to detect the encoding of data coming from the server? HTML pages have a character-encoding specifier, but the BOM at the start of the file takes precedence (I think). I got it to work by always detecting the encoding of any text coming from the database and converting it with iconv, but this turned out to be really slow and unreliable. It was truly a bigger mess, by an order of magnitude, than any other programming problem I have faced in my career.
  > Would not attempt again.
  > [1] <https://github.com/chardet/chardet>
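The dump-detect-convert step the commenter describes can be sketched with chardet; this is a hypothetical helper, not code from the thread:

```python
import chardet

def to_utf8(raw: bytes) -> bytes:
    """Best-effort re-encoding of arbitrarily encoded bytes as UTF-8."""
    guess = chardet.detect(raw)
    encoding = guess["encoding"] or "utf-8"  # fall back if detection fails
    return raw.decode(encoding, errors="replace").encode("utf-8")

# Windows-1252 curly quotes come out as valid UTF-8.
print(to_utf8(b"\x93hello\x94"))
```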
- Encoding detection
  > I found there is a Python library, <https://github.com/chardet/chardet>, which can be ported to Common Lisp.
- How to convert cmd output to UTF-8
  > Then use chardet to determine the encoding from the content.
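A sketch of that approach, capturing the output as raw bytes before decoding (the command shown is illustrative):

```python
import subprocess
import chardet

# Capture raw bytes; Windows console output is often in a legacy code page,
# not UTF-8, so decoding blindly would mangle it.
raw = subprocess.run(["cmd", "/c", "dir"], capture_output=True).stdout

guess = chardet.detect(raw)
encoding = guess["encoding"] or "utf-8"  # fall back if detection fails
print(raw.decode(encoding, errors="replace"))
```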
- Everything to know about Requests v2.26.0
  > The library that Requests uses for content-encoding detection has, for the past 10 years, been chardet, which is licensed LGPL-2.1.
- PyWhat: Identify Anything
- UTF-8 is not enough? Requesting help with an open source project!
What are some alternatives?
fuzzywuzzy - Fuzzy String Matching in Python
Charset Normalizer - Truly universal encoding detector in pure Python
xpinyin - Translate Chinese hanzi to pinyin (拼音) by Python, 汉字转拼音
pyfiglet - An implementation of figlet written in Python
Levenshtein - The Levenshtein Python C extension module contains functions for fast computation of Levenshtein distance and string similarity
shortuuid - A generator library for concise, unambiguous and URL-safe UUIDs.
pangu.py - Paranoid text spacing in Python
ijson - Iterative JSON parser with standard Python iterator interfaces
uniout - Never see escaped bytes in output.