rules
laserembeddings
| | rules | laserembeddings |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 1,106 | 223 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | over 1 year ago | 9 months ago |
| Language | JavaScript | Python |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rules
-
SpaCy v3.0 Released (Python Natural Language Processing)
Currently using https://github.com/nilp0inter/experta, but https://github.com/noxdafox/clipspy seems nice too. I shied away from it due to uneasiness about FFI and debugging, even though the original CLIPS is still awesome and has a very interesting manual.
There's also https://github.com/jruizgit/rules, but I haven't tried it yet.
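Forward chaining, repeatedly firing rules whose conditions match the current fact set until nothing new can be derived, is the core idea behind engines like experta and CLIPS. A minimal, purely illustrative sketch (the `Rule`/`RuleEngine` names here are made up for this example, not any library's actual API):

```python
# Toy forward-chaining rule engine -- illustrative only, not the API of
# experta, clipspy, or durable_rules. Facts are plain strings for simplicity.

class Rule:
    def __init__(self, condition, action):
        self.condition = condition  # callable: set of facts -> bool
        self.action = action        # callable: set of facts -> new fact (or None)

class RuleEngine:
    def __init__(self, rules):
        self.rules = rules

    def run(self, facts):
        facts = set(facts)
        changed = True
        while changed:  # keep firing until a full pass derives nothing new
            changed = False
            for rule in self.rules:
                if rule.condition(facts):
                    new_fact = rule.action(facts)
                    if new_fact is not None and new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
        return facts

rules = [
    Rule(lambda f: "socrates_is_human" in f,
         lambda f: "socrates_is_mortal"),
]
derived = RuleEngine(rules).run({"socrates_is_human"})
```

Real engines add pattern matching over structured facts and efficient matching algorithms (e.g. Rete in CLIPS), but the fixpoint loop above is the underlying shape.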
laserembeddings
-
Firefox Translations doesn't use the cloud
You're pretty much right on the money. For ParaCrawl[1] (which I worked on) we used fast machine translation systems that were "good enough" to translate one side of each pair to the language of the other, see whether they'd match sufficiently, and then deal with all the false positives through various filtering methods. Other datasets I know of use multilingual sentence embeddings, like LASER[2], to compute the distance between two sentences.
Both of these methods have a bootstrapping problem, but at this point in MT we have enough data for many languages to get started. Previous iterations of ParaCrawl used things like document structure and overlap of named entities among sentences to identify matching pairs, but that is much less robust. I don't know how they solve this problem today for low-resource languages.
[1] https://paracrawl.eu
[2] https://github.com/yannvgn/laserembeddings
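The distance computation over multilingual sentence embeddings boils down to similarity in vector space: a sentence and its translation should score much higher than an unrelated pair. A stdlib-only sketch of cosine similarity, with toy 3-dimensional vectors standing in for real LASER embeddings (which are 1024-dimensional and would come from the laserembeddings package in practice):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up vectors standing in for embeddings of an English sentence,
# its German translation, and an unrelated French sentence.
en = [0.9, 0.1, 0.3]
de = [0.8, 0.2, 0.3]
fr_unrelated = [0.1, 0.9, 0.1]

# A translated pair should be far closer than an unrelated pair.
sim_pair = cosine_similarity(en, de)
sim_noise = cosine_similarity(en, fr_unrelated)
```

In production bitext mining, a threshold on this score (or a margin-based variant that normalizes against each sentence's nearest neighbors) decides whether two sentences are kept as a translation pair.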
-
SpaCy v3.0 Released (Python Natural Language Processing)
I've been using LASER from Facebook Research via https://github.com/yannvgn/laserembeddings to accept multilingual input in front of the domain-specific models for recommendations and such (which are trained on English annotated examples).
What are some alternatives?
json-rules-engine - A rules engine expressed in JSON
syntaxdot - Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
duckling - Language, engine, and tooling for expressing, testing, and evaluating composable language rules on input strings.
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
Kornia - Geometric Computer Vision Library for Spatial AI
BLINK - Entity Linker solution
projects - 🪐 End-to-end NLP workflows from prototype to production
wiktextract - Wiktionary dump file parser and multilingual data extractor