witokit vs laserembeddings

| | witokit | laserembeddings |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 9 | 223 |
| Growth | - | - |
| Activity | 2.6 | 0.0 |
| Latest commit | over 3 years ago | 9 months ago |
| Language | Python | Python |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
-
Firefox Translations doesn't use the cloud
You're pretty much right on the money. For ParaCrawl[1] (which I worked on) we used fast machine translation systems that were "good enough": we translated one side of each pair into the language of the other, checked whether the two sides matched sufficiently, and then dealt with the false positives through various filtering methods. Other datasets I know of use multilingual sentence embeddings, like LASER[2], to compute the distance between two sentences.
Both of these methods have a bootstrapping problem, but at this point in MT, for many languages we have enough data to get started. Previous iterations of ParaCrawl used things like document structure and overlap of named entities among sentences to identify matching pairs, but this is much less robust. I don't know how they solve this problem today for low-resource languages.
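The embedding-based mining idea from the comment above can be sketched as follows. This is a toy illustration only: the `toy_embed` function and its hand-made vectors stand in for a real multilingual encoder (with the `laserembeddings` package you would instead call `Laser().embed_sentences(sentences, lang=...)` after downloading the model files), and the simple similarity threshold stands in for the more elaborate filtering ParaCrawl actually uses.

```python
import numpy as np

# Toy stand-in for a multilingual sentence encoder such as LASER.
# Real embeddings come from a trained model; these hand-made vectors
# just let the sketch run without any model files.
def toy_embed(sentence: str) -> np.ndarray:
    vectors = {
        "The cat sat on the mat.": np.array([0.90, 0.10, 0.00]),
        "Le chat est assis sur le tapis.": np.array([0.88, 0.12, 0.02]),
        "Stocks fell sharply today.": np.array([0.00, 0.20, 0.95]),
    }
    return vectors[sentence]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mine_pairs(src_sentences, tgt_sentences, threshold=0.9):
    """Keep (src, tgt) pairs whose embedding similarity clears a threshold.

    Sentences in different languages that mean the same thing land close
    together in the shared embedding space, so high cosine similarity is
    a (noisy) signal that they are translations of each other.
    """
    pairs = []
    for s in src_sentences:
        for t in tgt_sentences:
            score = cosine(toy_embed(s), toy_embed(t))
            if score >= threshold:
                pairs.append((s, t, score))
    return pairs

pairs = mine_pairs(
    ["The cat sat on the mat.", "Stocks fell sharply today."],
    ["Le chat est assis sur le tapis."],
)
# Only the cat/chat pair survives; the finance sentence is filtered out.
```

In production mining, a margin-based score over nearest neighbours is typically used instead of a raw cosine threshold, since raw similarities are poorly calibrated across sentence lengths and languages.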
[1] https://paracrawl.eu
[2] https://github.com/yannvgn/laserembeddings
-
SpaCy v3.0 Released (Python Natural Language Processing)
I've been using LASER from Facebook Research via https://github.com/yannvgn/laserembeddings to accept multilingual input in front of the domain-specific models for recommendations and stuff (that are trained on English annotated examples).
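The setup described in that comment (multilingual embeddings feeding a model trained only on English examples) can be sketched like this. Everything here is illustrative: the vectors are made up, and a nearest-centroid classifier stands in for whatever downstream model is actually used. The key property being demonstrated is that, because a multilingual encoder maps all languages into one shared space, a model fit on English embeddings can score non-English input directly.

```python
import numpy as np

# Pretend these are LASER-style embeddings of English training sentences,
# labelled by domain. Vectors are hand-made so the sketch is self-contained.
english_train = {
    "sports":  [np.array([0.90, 0.10]), np.array([0.85, 0.20])],
    "finance": [np.array([0.10, 0.90]), np.array([0.15, 0.80])],
}

# "Train" on English only: one centroid per label.
centroids = {label: np.mean(vecs, axis=0)
             for label, vecs in english_train.items()}

def classify(embedding: np.ndarray) -> str:
    # Nearest centroid in the shared embedding space; no language
    # information is needed at inference time.
    return min(centroids,
               key=lambda label: np.linalg.norm(embedding - centroids[label]))

# A (pretend) French sentence about football, embedded near the
# English sports examples because the space is shared across languages.
french_embedding = np.array([0.88, 0.15])
label = classify(french_embedding)
```

With the real library, the embeddings would come from something like `Laser().embed_sentences(["Le match était superbe."], lang="fr")`, and the classifier would be whatever domain-specific model was trained on the English annotations.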
What are some alternatives?
wit - WIT (Wikipedia-based Image Text) Dataset is a large multimodal multilingual dataset comprising 37M+ image-text sets with 11M+ unique images across 100+ languages.
syntaxdot - Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
wiki_dump - A library that assists in traversing and downloading from Wikimedia Data Dumps and their mirrors.
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
wikiteam - Tools for downloading and preserving wikis. We archive wikis, from Wikipedia to tiniest wikis. As of 2023, WikiTeam has preserved more than 350,000 wikis.
BLINK - Entity Linker solution
wp2git - Downloads and imports Wikipedia page histories to a git repository
wiktextract - Wiktionary dump file parser and multilingual data extractor
projects - 🪐 End-to-end NLP workflows from prototype to production
duckling - Language, engine, and tooling for expressing, testing, and evaluating composable language rules on input strings.
rules - Durable Rules Engine
kerning-pairs - The ultimate list of kerning pairs for type designers