text-short VS unicode-transforms

Compare text-short vs unicode-transforms and see what their differences are.

text-short

Memory-efficient representation of Unicode text strings (by haskell-hvr)

unicode-transforms

Fast Unicode normalization in Haskell (by composewell)
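
For orientation, here is a minimal sketch of each package's core entry point, assuming the modules they publish on Hackage (Data.Text.Short from text-short and Data.Text.Normalize from unicode-transforms); it only illustrates the one-line descriptions above and is not taken from either project's documentation.

    {-# LANGUAGE OverloadedStrings #-}

    import Data.Text (Text)
    import Data.Text.Normalize (NormalizationMode (NFC), normalize)  -- unicode-transforms
    import qualified Data.Text.Short as TS                            -- text-short

    main :: IO ()
    main = do
      -- text-short: ShortText is a memory-compact, UTF-8 backed alternative to Text
      let s = TS.pack "héllo"
      print (TS.length s)

      -- unicode-transforms: rewrite a Text value into a given normal form, e.g. NFC,
      -- turning the combining sequence "e" + U+0301 into the precomposed "é"
      let combining = "e\x0301" :: Text
      print (normalize NFC combining)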

             text-short                                unicode-transforms
Mentions     1                                         1
Stars        16                                        47
Growth       -                                         -
Activity     5.5                                       2.5
Last Commit  6 months ago                              5 months ago
Language     Haskell                                   Haskell
License      BSD 3-clause "New" or "Revised" License   BSD 3-clause "New" or "Revised" License

  • Mentions - the total number of mentions of the project that we've tracked, plus the number of user-suggested alternatives.
  • Stars - the number of stars that a project has on GitHub.
  • Growth - month-over-month growth in stars.
  • Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

text-short

Posts with mentions or reviews of text-short. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-09-11.
  • CS SYD - JSON Vulnerability in Haskell's Aeson library
    3 projects | /r/haskell | 11 Sep 2021
    Technically, Text for keys could be abstracted away as well, e.g. if someone would like to experiment with https://hackage.haskell.org/package/text-short. (JSON object keys are not sliced that often, so the 16-byte overhead may make a difference.)
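
A minimal sketch of the idea in that comment, assuming the published Data.Text.Short API (ShortText, fromText, toText); the Key wrapper and helper names are illustrative, not aeson's actual internals.

    {-# LANGUAGE OverloadedStrings #-}

    import qualified Data.Text as T
    import qualified Data.Text.Short as TS

    -- Hypothetical key wrapper: store JSON object keys as ShortText, which drops
    -- the slicing-related overhead that T.Text carries per value; object keys are
    -- rarely sliced after parsing, so the compact representation loses little.
    newtype Key = Key TS.ShortText
      deriving (Eq, Ord, Show)

    -- Convert a parsed Text key into the compact representation.
    keyFromText :: T.Text -> Key
    keyFromText = Key . TS.fromText

    -- Convert back when a full Text is needed, e.g. for re-serialisation.
    keyToText :: Key -> T.Text
    keyToText (Key k) = TS.toText k

    main :: IO ()
    main = do
      let k = keyFromText "user_name"
      print k
      print (keyToText k)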

unicode-transforms

Posts with mentions or reviews of unicode-transforms. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-17.
  • [ANN] unicode-collation 0.1
    3 projects | /r/haskell | 17 Apr 2021
    Thanks! Here's a puzzle. Profiling shows that about a third of the time in my code is spent in normalize from unicode-transforms. (Normalization is a required step in the algorithm but can be omitted if you know that the input is already in NFD form.) And when I add a benchmark that omits normalization, I see run time cut by a third.

    But text-icu's run time in my benchmark doesn't seem to be affected much by whether I set the normalization option. I am not sure how to square that with the benchmarks here that seem to show unicode-transforms outperforming text-icu in normalization.

    text-icu's documentation says that "an incremental check is performed to see whether the input data is in FCD form. If the data is not in FCD form, incremental NFD normalization is performed." I'm not sure exactly what this means, but it may mean that text-icu avoids normalizing the whole string, but just normalizes enough to do the comparison, and sometimes avoids normalization altogether if it can quickly determine that the string is already normalized. I don't see a way to do this currently with unicode-transforms.
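
A rough sketch of the trade-off described above, using the published Data.Text.Normalize API from unicode-transforms (normalize, NFD); the alreadyNFD flag and the function name are hypothetical, standing in for whatever the caller knows about its input. unicode-transforms has no incremental check like text-icu's FCD test, so the only saving available is to skip the pass entirely.

    import Data.Text (Text)
    import Data.Text.Normalize (NormalizationMode (NFD), normalize)

    -- Prepare a string for collation: normalize to NFD unless the caller
    -- already knows the input is in NFD form, in which case skip the pass,
    -- as the comment above suggests.
    prepareForCollation :: Bool -> Text -> Text
    prepareForCollation alreadyNFD t
      | alreadyNFD = t
      | otherwise  = normalize NFD t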

What are some alternatives?

When comparing text-short and unicode-transforms, you can also consider the following projects:

text - Haskell library for space- and time-efficient operations over Unicode text.

text-trie - An efficient finite map from Text to values, based on bytestring-trie.

text-containers

with-utf8 - Get your IO right on the first try

text-stream-decode - Streaming decoding functions for UTF encodings.

text-icu - This package provides the Haskell Data.Text.ICU library, for performing complex manipulation of Unicode text.

text-ansi

text-binary - Binary instances for strict and lazy Text data types

text-time - Fast time parser for Text

text-conversions - Safe conversions between Haskell textual types

hashable - A class for types that can be converted to a hash value