typed-encoding VS unicode-transforms

Compare typed-encoding vs unicode-transforms and see what their differences are.

                typed-encoding                             unicode-transforms
Mentions        0                                          1
Stars           6                                          47
Growth          -                                          -
Activity        3.3                                        2.5
Last commit     5 months ago                               5 months ago
Language        Haskell                                    Haskell
License         BSD 3-clause "New" or "Revised" License    BSD 3-clause "New" or "Revised" License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

typed-encoding

Posts with mentions or reviews of typed-encoding. We have used some of these posts to build our list of alternatives and similar projects.

We haven't tracked posts mentioning typed-encoding yet.
Tracking mentions began in Dec 2020.

unicode-transforms

Posts with mentions or reviews of unicode-transforms. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-17.
  • [ANN] unicode-collation 0.1
    3 projects | /r/haskell | 17 Apr 2021
    Thanks! Here's a puzzle. Profiling shows that about a third of the time in my code is spent in normalize from unicode-transforms. (Normalization is a required step in the algorithm but can be omitted if you know that the input is already in NFD form.) And when I add a benchmark that omits normalization, I see run time cut by a third.

    But text-icu's run time in my benchmark doesn't seem to be affected much by whether I set the normalization option. I am not sure how to square that with the benchmarks here that seem to show unicode-transforms outperforming text-icu in normalization.

    text-icu's documentation says that "an incremental check is performed to see whether the input data is in FCD form. If the data is not in FCD form, incremental NFD normalization is performed." I'm not sure exactly what this means, but it may mean that text-icu avoids normalizing the whole string, but just normalizes enough to do the comparison, and sometimes avoids normalization altogether if it can quickly determine that the string is already normalized. I don't see a way to do this currently with unicode-transforms.
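
The trade-off described in that comment (normalization is required by the algorithm but is pure overhead when the input is already in NFD form) can be captured with a small guard around unicode-transforms' normalize. The sketch below is only illustrative: prepareKey and its alreadyNFD flag are hypothetical names, and the caller has to supply the "already normalized" knowledge itself, since unicode-transforms exposes no incremental quick check comparable to text-icu's FCD test.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)
import Data.Text.Normalize (NormalizationMode (NFD), normalize)

-- Hypothetical helper: normalize to NFD only when the caller cannot
-- guarantee that the input is already decomposed.
prepareKey :: Bool -> Text -> Text
prepareKey alreadyNFD t
  | alreadyNFD = t                 -- skip the normalization pass entirely
  | otherwise  = normalize NFD t   -- Data.Text.Normalize from unicode-transforms

main :: IO ()
main = do
  print (prepareKey False "\x00e9")   -- precomposed é is decomposed to 'e' plus U+0301
  print (prepareKey True  "e\x0301")  -- caller asserts NFD, so normalize is skipped
```

In the benchmark described above, omitting the normalization pass in exactly this way is what cut the run time by roughly a third.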

What are some alternatives?

When comparing typed-encoding and unicode-transforms, you can also consider the following projects:

typed-encoding-encoding - Bridge between the `encoding` and `typed-encoding` packages

with-utf8 - Get your IO right on the first try

typed-admin

hashable - A class for types that can be converted to a hash value

refined - Refinement types with static checking

typed-digits - Digits, indexed by their base at the type level

jump - Jump start your Haskell development

hnix - A Haskell re-implementation of the Nix expression language

critbit - A Haskell implementation of crit-bit trees.

code-builder - Packages for defining APIs, running them, generating client code and documentation.

resource-pool - A high-performance striped resource pooling implementation for Haskell

lens - Lenses, Folds, and Traversals - Join us on web.libera.chat #haskell-lens