tokenstream
A versatile token stream for handwritten parsers. (by vberlier)
sentence-splitter
Text to sentence splitter using heuristic algorithm by Philipp Koehn and Josh Schroeder. (by mediacloud)
| | tokenstream | sentence-splitter |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 12 | 216 |
| Growth | - | 0.0% |
| Activity | 4.8 | 0.0 |
| Last commit | 9 months ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
The number of mentions is the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tokenstream
Posts with mentions or reviews of tokenstream.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-06-17.
-
vberlier/tokenstream: A versatile token stream for handwritten parsers
The repo has examples with some of the generated error messages. https://github.com/vberlier/tokenstream/blob/main/examples/json.py
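The core idea behind a library like tokenstream is a lazily consumable sequence of typed tokens with `peek`/`expect` operations, which lets a handwritten parser report precise errors. A minimal hand-rolled sketch of that pattern (the names and structure here are hypothetical, not tokenstream's actual API) might look like:

```python
import re
from dataclasses import dataclass

@dataclass
class Token:
    type: str
    value: str

class TokenStream:
    """Minimal token-stream sketch: tokenize with named regex groups,
    then consume tokens with peek/expect. Illustrative only."""

    def __init__(self, source, patterns):
        # Combine the patterns into one regex of named groups;
        # tokens matching the "ws" pattern are discarded.
        regex = "|".join(f"(?P<{name}>{pat})" for name, pat in patterns.items())
        self.tokens = [
            Token(m.lastgroup, m.group())
            for m in re.finditer(regex, source)
            if m.lastgroup != "ws"
        ]
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def expect(self, token_type):
        # Raising here is where a real library would attach location
        # info to produce the error messages mentioned above.
        token = self.peek()
        if token is None or token.type != token_type:
            raise SyntaxError(f"expected {token_type!r}, got {token!r}")
        self.pos += 1
        return token

stream = TokenStream("1 + 2", {"number": r"\d+", "plus": r"\+", "ws": r"\s+"})
left = stream.expect("number")
stream.expect("plus")
right = stream.expect("number")
print(int(left.value) + int(right.value))  # prints 3
```

The `expect` method is the key ergonomic win: the parser states what it needs next, and mismatches fail immediately with both the expected and actual token available for the error message.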
sentence-splitter
Posts with mentions or reviews of sentence-splitter.
We have used some of these posts to build our list of alternatives
and similar projects.
-
Text translation question: Helsinki-NLP skips end sentences. Any good open sourced pre-trained models for large text translation?
There are plenty of sentence splitters available, such as https://github.com/mediacloud/sentence-splitter, but sometimes you'll need language-specific ones.
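The heuristic approach this library takes can be illustrated with a toy splitter: break on sentence-final punctuation followed by whitespace and a capital letter, but suppress breaks after known abbreviations. This is a simplified sketch of the general technique, not the library's actual algorithm or its non-breaking-prefix lists:

```python
import re

# Tiny sample of abbreviations that should not end a sentence;
# the real library ships per-language non-breaking prefix lists.
ABBREVIATIONS = {"dr", "mr", "mrs", "prof", "etc"}

def split_sentences(text):
    """Split on ., ! or ? followed by whitespace and an uppercase
    letter, unless the period ends a known abbreviation."""
    sentences, start = [], 0
    for match in re.finditer(r"[.!?]+(?=\s+[A-Z])", text):
        last_word = text[start:match.start()].rsplit(None, 1)[-1]
        if text[match.start()] == "." and last_word.lower().rstrip(".") in ABBREVIATIONS:
            continue  # abbreviation, not a sentence boundary
        sentences.append(text[start:match.end()].strip())
        start = match.end()
    tail = text[start:].strip()
    if tail:
        sentences.append(tail)
    return sentences

print(split_sentences("Dr. Smith arrived. He sat down. Was it late? Yes."))
# ['Dr. Smith arrived.', 'He sat down.', 'Was it late?', 'Yes.']
```

Even this toy version shows why language-specific splitters matter: the abbreviation list, capitalization conventions, and punctuation rules all vary by language.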
What are some alternatives?
When comparing tokenstream and sentence-splitter you can also consider the following projects:
xontrib-output-search - Get identifiers, paths, URLs and words from the previous command output and use them for the next command in xonsh shell.
word-piece-tokenizer - A Lightweight Word Piece Tokenizer
Hebrew-Tokenizer - A very simple python tokenizer for Hebrew text.
bitextor - Bitextor generates translation memories from multilingual websites
spacy-experimental - 🧪 Cutting-edge experimental spaCy components and features