Sentimentanalysis vs laserembeddings

| | Sentimentanalysis | laserembeddings |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 7 | 223 |
| Growth | - | - |
| Activity | 2.6 | 0.0 |
| Latest commit | about 3 years ago | 9 months ago |
| Language | Python | Python |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
Sentimentanalysis
[P] Language Independent Sentiment Analysis
See here for details: https://github.com/AOK-PLUS/Sentimentanalysis
- Language-independent sentiment analysis using transformers
laserembeddings
Firefox Translations doesn't use the cloud
You're pretty much right on the money. For ParaCrawl[1] (which I worked on) we used fast machine translation systems that were "good enough" to translate one side of each pair to the language of the other, see whether they'd match sufficiently, and then deal with all the false positives through various filtering methods. Other datasets I know of use multilingual sentence embeddings, like LASER[2], to compute the distance between two sentences.
Both of these methods have a bootstrapping problem, but at this point in MT we have enough data for many languages to get started. Previous iterations of ParaCrawl used things like document structure and overlap of named entities among sentences to identify matching pairs, but those signals are much less robust. I don't know how they solve this problem today for low-resource languages.
[1] https://paracrawl.eu
[2] https://github.com/yannvgn/laserembeddings
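The embedding-distance filtering described above can be sketched in plain NumPy. The threshold and the toy 4-dimensional vectors are illustrative stand-ins; real LASER embeddings are 1024-dimensional and would come from `laser.embed_sentences` as shown in the comment:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two sentence embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In practice the vectors would come from LASER, e.g.:
#   from laserembeddings import Laser
#   laser = Laser()
#   src_vecs = laser.embed_sentences(src_sentences, lang='en')
#   tgt_vecs = laser.embed_sentences(tgt_sentences, lang='de')
# Toy 4-dim vectors keep the sketch self-contained.
src_vecs = np.array([[0.9, 0.1, 0.0, 0.2],
                     [0.1, 0.8, 0.3, 0.0]])
tgt_vecs = np.array([[0.88, 0.12, 0.05, 0.18],
                     [0.0, 0.2, 0.9, 0.4]])

THRESHOLD = 0.9  # candidate pairs below this are discarded as false positives

pairs = [(i, j, cosine_similarity(s, t))
         for i, s in enumerate(src_vecs)
         for j, t in enumerate(tgt_vecs)]
kept = [(i, j) for i, j, sim in pairs if sim >= THRESHOLD]
print(kept)  # only the (0, 0) pair survives the threshold
```

The threshold is the tunable part: set it too low and false positives pollute the corpus, too high and genuine translation pairs are thrown away.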
SpaCy v3.0 Released (Python Natural Language Processing)
I've been using LASER from Facebook Research via https://github.com/yannvgn/laserembeddings to accept multilingual input in front of the domain-specific models for recommendations and stuff (that are trained on English annotated examples).
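A minimal sketch of that setup, assuming LASER-style language-agnostic embeddings. Toy 2-dimensional vectors stand in for real `laser.embed_sentences` output, and nearest-centroid classification stands in for whatever the downstream domain model actually is:

```python
import numpy as np

# Stand-in vectors: with laserembeddings these would be
#   laser.embed_sentences(texts, lang='en')  (1024-dim, language-agnostic),
# so a model fitted on English vectors also accepts French/German input.
train_vecs = np.array([[1.0, 0.1],   # "great product"    -> positive
                       [0.9, 0.2],   # "love it"          -> positive
                       [0.1, 1.0],   # "terrible service" -> negative
                       [0.2, 0.9]])  # "waste of money"   -> negative
train_labels = np.array([1, 1, 0, 0])

# Nearest-centroid classification in the shared embedding space.
centroids = {c: train_vecs[train_labels == c].mean(axis=0)
             for c in np.unique(train_labels)}

def predict(vec: np.ndarray) -> int:
    return min(centroids, key=lambda c: np.linalg.norm(vec - centroids[c]))

# A query in another language lands near the positive centroid because
# LASER maps translations of a sentence close together.
query = np.array([0.95, 0.15])  # e.g. embedding of "j'adore ce produit"
print(predict(query))  # -> 1 (positive)
```

The point is that only the embedding step sees the source language; everything downstream is trained once, on English annotations.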
What are some alternatives?
trankit - Trankit is a Light-Weight Transformer-based Python Toolkit for Multilingual Natural Language Processing
syntaxdot - Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
contextualized-topic-models - A python package to run contextualized topic modeling. CTMs combine contextualized embeddings (e.g., BERT) with topic models to get coherent topics. Published at EACL and ACL 2021.
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
OpenPrompt - An Open-Source Framework for Prompt-Learning.
BLINK - Entity Linker solution
PyTorch-NLP - Basic Utilities for PyTorch Natural Language Processing (NLP)
wiktextract - Wiktionary dump file parser and multilingual data extractor
projects - 🪐 End-to-end NLP workflows from prototype to production
duckling - Language, engine, and tooling for expressing, testing, and evaluating composable language rules on input strings.
rules - Durable Rules Engine
kerning-pairs - The ultimate list of kerning pairs for type designers