nlpaug vs spaCy

| | nlpaug | spaCy |
|---|---|---|
| Mentions | 10 | 106 |
| Stars | 4,252 | 28,751 |
| Growth | - | 1.3% |
| Activity | 0.0 | 9.2 |
| Latest commit | about 1 year ago | about 20 hours ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nlpaug
-
Use WordNet to collect homonyms
You'd want to use an NLP method for this, since determining the best homonym requires deriving context from the words before and after the substitution point. Take a look at nlpaug.
-
Contextual Similarity between a list of n-grams and a website
3) Use deep contextual models with wordpiece/BPE tokenizers, like BERT, RoBERTa, etc. On the simpler side, you could also swap words with synonyms, which is easy to do with this library: https://github.com/makcedward/nlpaug. Instead of a single n-gram per topic, it might be nice to have a bundle of related words; you could play around with WordNet and see if that's helpful, which is also easy to do with nlpaug.
-
Word embeddings / language models for synonym generation?
In practice, even swapping words with dictionary synonyms is a problem because context isn't considered. Contextual augmentation has become more popular in the last year or two: basically, you mask a token and then use a large language model to predict it, so the prediction has the full sentence as context. It's imperfect, but it's surprisingly useful when you want to upsample data. nlpaug has an easy-to-use implementation: https://github.com/makcedward/nlpaug
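The mask-then-predict loop is simple to sketch. Below, `predict_masked` is a stand-in for a real masked language model (nlpaug wires this up to BERT-style models for you via `naw.ContextualWordEmbsAug(action="substitute")`); the point is that the replacement is chosen with the whole sentence visible:

```python
# Illustrative sketch of contextual word substitution.
import random

def predict_masked(tokens, mask_index):
    # Stand-in for a masked language model; a real setup would call a
    # BERT-style fill-mask model here, conditioned on all other tokens.
    fake_vocab = {"quick": "swift", "lazy": "sleepy"}
    return fake_vocab.get(tokens[mask_index], tokens[mask_index])

def contextual_substitute(sentence, rng=random.Random(0)):
    tokens = sentence.split()
    i = rng.randrange(len(tokens))          # pick a token to mask
    tokens[i] = predict_masked(tokens, i)   # model fills the mask in context
    return " ".join(tokens)

print(contextual_substitute("the quick brown fox"))
```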
-
Text Data Augmentation using GPT-2 Language Model
A cool library I recently came across for text augmentation is nlpaug, it does a different thing to your approach, but I think both are useful :)
-
[D] Data Augmentation in NLP
This is a nice starting point: https://github.com/makcedward/nlpaug
-
NLPAug: what proportion of augmented sentences do you usually add to the dataset?
Since the dataset is relatively tiny, we are working on augmenting it with nlpaug. We use two strategies: synonymisation and back translation.
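The proportion question comes down to how many augmented copies you add per original example. A small sketch of controlling that ratio; the two augmenters here are toy stand-ins for nlpaug's synonym and back-translation pipelines:

```python
# Sketch: mixing two augmentation strategies at a controlled ratio.
import random

def synonymise(text, rng):
    swaps = {"good": "fine", "movie": "film"}  # toy synonym table
    return " ".join(swaps.get(w, w) for w in text.split())

def back_translate(text, rng):
    # Stand-in: a real implementation round-trips through another language.
    return text

def augment_dataset(texts, ratio=0.5, rng=random.Random(0)):
    """Add roughly ratio * len(texts) augmented copies to the dataset."""
    strategies = [synonymise, back_translate]
    n_new = int(len(texts) * ratio)
    extra = [rng.choice(strategies)(rng.choice(texts), rng) for _ in range(n_new)]
    return texts + extra

data = ["good movie", "bad movie", "great plot", "dull plot"]
print(len(augment_dataset(data, ratio=0.5)))  # 4 originals + 2 augmented = 6
```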
-
Show HN: 40k Book Recommendations on HN Extracted Using Deep Learning
Thank you!
The medium post is amazingly written! I basically did the same thing - and you beat me with the data augmentation piece. I tried using nlpaug [0] but it didn't improve the model performance. I'll definitely try swapping book titles around.
[0] https://github.com/makcedward/nlpaug
-
[R] Call for Participation to NL-Augmenter 🦎 → 🐍
Are there any shortfalls in nlpaug which justified another project?
-
A Visual Survey of Data Augmentation in NLP
Spelling error injection: in this method, we add spelling errors to some random words in the sentence. These spelling errors can be added programmatically or by using a mapping of common spelling errors, such as this list for English.
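A minimal sketch of the mapping-based variant; the misspelling table here is a toy stand-in for a full list of common English misspellings:

```python
# Sketch: inject spelling errors by swapping words for common misspellings.
import random

COMMON_MISSPELLINGS = {
    "receive": "recieve",
    "definitely": "definately",
    "separate": "seperate",
}

def inject_spelling_errors(sentence, rng=random.Random(0)):
    out = []
    for word in sentence.split():
        # Swap in a misspelling about half the time, when one is known.
        if word.lower() in COMMON_MISSPELLINGS and rng.random() < 0.5:
            out.append(COMMON_MISSPELLINGS[word.lower()])
        else:
            out.append(word)
    return " ".join(out)

print(inject_spelling_errors("i will definitely receive it"))
```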
spaCy
-
Step by step guide to create customized chatbot by using spaCy (Python NLP library)
Hi Community, In this article, I will demonstrate the steps below to create your own chatbot using spaCy (spaCy is an open-source software library for advanced natural language processing, written in Python and Cython):
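A minimal intent-matching sketch in the spirit of such a chatbot, using spaCy's `Matcher` on a blank English pipeline (so no model download is needed); the intents and responses are illustrative:

```python
# Sketch: rule-based intent matching with spaCy's Matcher.
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")  # tokenizer-only pipeline, no trained model required
matcher = Matcher(nlp.vocab)
matcher.add("GREETING", [[{"LOWER": {"IN": ["hi", "hello", "hey"]}}]])
matcher.add("HOURS", [[{"LOWER": "opening"}, {"LOWER": "hours"}]])

RESPONSES = {
    "GREETING": "Hello! How can I help?",
    "HOURS": "We are open 9am-5pm, Monday to Friday.",
}

def reply(message):
    doc = nlp(message)
    for match_id, start, end in matcher(doc):
        return RESPONSES[nlp.vocab.strings[match_id]]
    return "Sorry, I didn't understand that."

print(reply("hello there"))
print(reply("what are your opening hours?"))
```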
-
Best AI SEO Tools for NLP Content Optimization
SpaCy: An open-source library providing tools for advanced NLP tasks like tokenization, entity recognition, and part-of-speech tagging.
-
Who has the best documentation you've seen or liked in 2023?
spaCy https://spacy.io/
-
A beginnerโs guide to sentiment analysis using OceanBase and spaCy
In this article, I'm going to walk through a sentiment analysis project from start to finish, using open-source Amazon product reviews. However, using the same approach, you can easily implement mass sentiment analysis on your own products. We'll explore an approach to sentiment analysis with one of the most popular Python NLP packages: spaCy.
-
Against LLM Maximalism
spaCy [0] is a state-of-the-art, easy-to-use NLP library from the pre-LLM era. This post is the spaCy founder's thoughts on how to integrate LLMs with the kinds of problems that "traditional" NLP is used for right now. It's an advertisement for Prodigy [1], their paid tool for using LLMs to assist data labeling. That said, I largely agree with the premise, and the post is worth reading in full.
The steps described in "LLM pragmatism" are basically what I see my data science friends doing: it's hard to justify the cost (money and latency) of using LLMs directly for all tasks, and even if you do, you'll need a baseline model to compare against, so why not use LLMs for dataset creation or augmentation in order to train a classic supervised model?
[0] https://spacy.io/
[1] https://prodi.gy/
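That workflow can be sketched in a few lines. Here `llm_label` is a stand-in for a one-off (expensive) LLM labelling pass, and the "classic supervised model" is reduced to a toy keyword-vote classifier purely for illustration:

```python
# Sketch: LLM labels the data once; a cheap supervised model serves traffic.
def llm_label(text):
    # Stand-in: a real pipeline would prompt an LLM API here.
    return "positive" if "great" in text or "love" in text else "negative"

def train_keyword_classifier(texts, labels):
    """Toy supervised model: record which labels each word co-occurs with."""
    word_votes = {}
    for text, label in zip(texts, labels):
        for word in text.split():
            word_votes.setdefault(word, []).append(label)
    return word_votes

def predict(model, text):
    votes = [lab for w in text.split() for lab in model.get(w, [])]
    return max(set(votes), key=votes.count) if votes else "negative"

raw = ["great film", "i love it", "terrible pacing", "boring plot"]
labels = [llm_label(t) for t in raw]           # one-off LLM labelling pass
model = train_keyword_classifier(raw, labels)  # cheap model trained on it
print(predict(model, "great film"))
```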
-
How to predict this sequence?
spaCy
-
What do you all think about (setq sentence-end-double-space nil)?
I chose spaCy. Although it's not state of the art, it's very well established and stable.
- spaCy: Industrial-Strength Natural Language Processing
What are some alternatives?
NL-Augmenter - NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations
TextBlob - Simple, Pythonic, text processing--Sentiment analysis, part-of-speech tagging, noun phrase extraction, translation, and more.
Tic-Tac-Toe-Gym - This is the Tic-Tac-Toe game made with Python using the PyGame library and the Gym library to implement the AI with Reinforcement Learning
Stanza - Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
azureml-examples - Official community-driven Azure Machine Learning examples, tested with GitHub Actions.
NLTK - NLTK Source
advertorch - A Toolbox for Adversarial Robustness Research
BERT-NER - Pytorch-Named-Entity-Recognition-with-BERT
dopamine - Dopamine is a research framework for fast prototyping of reinforcement learning algorithms.
polyglot - Multilingual text (NLP) processing toolkit
SuiSense - Using Artificial Intelligence to distinguish between suicidal and depressive messages (4th Place Congressional App Challenge)
textacy - NLP, before and after spaCy