anime_wordclouds vs NLTK

| | anime_wordclouds | NLTK |
|---|---|---|
| Mentions | 2 | 68 |
| Stars | 9 | 13,606 |
| Growth | - | 0.9% |
| Activity | 4.2 | 9.4 |
| Last commit | over 1 year ago | about 15 hours ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
anime_wordclouds
-
Most used words in the anime, according to the English subtitles (Word Cloud)
I've generated this word cloud from the English subtitles. The size of each word represents the number of times it is used in the series. I've removed some stopwords: the, I, you, etc. More info here: https://github.com/TheRaphael0000/anime_wordclouds
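For reference, a minimal sketch of that pipeline, assuming the subtitles have been concatenated into a hypothetical `subtitles.txt` and using NLTK's stopword list plus the `wordcloud` package (the linked repo may differ in its details):

```python
import nltk
from nltk.corpus import stopwords
from wordcloud import WordCloud

nltk.download("stopwords")  # one-time download of NLTK's stopword lists

# "subtitles.txt" is a hypothetical file holding the episode subtitles as plain text.
with open("subtitles.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Drop common stopwords (the, i, you, ...) so they don't dominate the cloud.
stops = set(stopwords.words("english"))

# WordCloud sizes each remaining word by its frequency in the text.
cloud = WordCloud(stopwords=stops, width=800, height=400).generate(text)
cloud.to_file("wordcloud.png")
```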
-
[OC] Word cloud from Cowboy Bebop English subtitles
Source code: https://github.com/TheRaphael0000/anime_wordclouds
NLTK
-
Create a Question/Answer Chatbot in Python
Using the NLTK Natural Language Toolkit
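NLTK ships a small rule-based chat utility, `nltk.chat.util.Chat`, that gives a feel for this kind of project; the pattern/response pairs below are illustrative assumptions, not the tutorial's actual code:

```python
from nltk.chat.util import Chat, reflections

# Hypothetical pattern/response pairs; patterns are regexes, and %1 echoes
# the first captured group back into the response.
pairs = [
    (r"hi|hello", ["Hello! Ask me a question."]),
    (r"what is (.*)\?", ["I'm not sure what %1 is, but I can try to find out."]),
    (r"quit", ["Goodbye!"]),
]

bot = Chat(pairs, reflections)
bot.converse()  # read/respond loop on stdin until the user types "quit"
```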
- NLTK version 3.8.2 is no longer available on PyPI
-
350M Tokens Don't Lie: Love and Hate in Hacker News
Is this just using an LLM to be cool? How does a pure LLM with a simple "on a scale between 0-10" prompt stack up against traditional, battle-tested sentiment analysis tools?
Gemini suggests NLTK and spaCy
https://www.nltk.org/
https://spacy.io/
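For comparison, this is roughly what the traditional route looks like with NLTK's bundled VADER sentiment analyzer; the 0-10 rescaling of VADER's compound score is an assumption made here to match the prompt in the thread:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # lexicon used by VADER

sia = SentimentIntensityAnalyzer()
scores = sia.polarity_scores("I absolutely love this HN thread!")

print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

# compound is normalized to [-1, 1]; a rough 0-10 rescaling (an assumption):
print((scores["compound"] + 1) * 5)
```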
-
Building a local AI smart Home Assistant
Alternatively, could we not simply split on common characters such as newlines and periods to break the text into sentences? It would be fragile, though, with special handling required for numbers with decimal points and probably various other edge cases.
There are also Python libraries meant for natural language parsing[0] that could do that task for us. I even see examples on Stack Overflow[1] that simply split text into sentences.
[0]: https://www.nltk.org/
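For example, NLTK's `sent_tokenize` handles the abbreviation and decimal-point edge cases that naive period-splitting trips over (a minimal sketch):

```python
import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt")  # pretrained sentence-boundary models

text = "Dr. Smith set the thermostat to 21.5 degrees. Then he left. Was it cold?"

# Naive splitting on "." would break at "Dr." and "21.5"; punkt handles both.
print(sent_tokenize(text))
# ['Dr. Smith set the thermostat to 21.5 degrees.', 'Then he left.', 'Was it cold?']
```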
-
Sorry if this is a dumb question but is the main idea behind LLMs to output text based on user input?
Check out https://www.nltk.org/ and work through it; it'll give you a foundational understanding of how all this works. But very basically, it's just a fancy auto-complete.
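To make the "fancy auto-complete" point concrete, here is a toy next-word predictor built from nothing but bigram counts over one of NLTK's sample corpora (the corpus choice is arbitrary):

```python
import nltk
from nltk.corpus import gutenberg

nltk.download("gutenberg")  # sample texts bundled with NLTK

# Count how often each word follows each other word.
words = [w.lower() for w in gutenberg.words("austen-emma.txt")]
cfd = nltk.ConditionalFreqDist(nltk.bigrams(words))

# The most likely continuations of "the", by raw frequency -- the same
# idea an LLM implements with a vastly richer notion of context.
print(cfd["the"].most_common(5))
```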
-
Best Portfolio Projects for Data Science
NLTK Documentation
- Where to start learning NLP?
-
Is there a programmatic way to check if two strings are paraphrased?
If this is true, then you'll also need the Natural Language Toolkit to process the words.
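One simple baseline along those lines (a lexical-overlap sketch, not a real paraphrase detector; the 0.5 threshold is an arbitrary assumption) could look like this with NLTK:

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")

lemmatize = WordNetLemmatizer().lemmatize
stops = set(stopwords.words("english"))

def content_words(s):
    # Keep only lemmatized, non-stopword alphabetic tokens.
    return {lemmatize(w) for w in word_tokenize(s.lower())
            if w.isalpha() and w not in stops}

def maybe_paraphrase(a, b, threshold=0.5):
    # Jaccard overlap of content words; threshold is a guess, tune per task.
    wa, wb = content_words(a), content_words(b)
    return len(wa & wb) / len(wa | wb) >= threshold

print(maybe_paraphrase("The cat sat on the mat.", "A cat is sitting on the mat."))
```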
-
[CROSS-POST] What programming language should I learn for corpus linguistics?
In that case, you should definitely have a look at Python's nltk library, which stands for Natural Language Toolkit. It has a rich corpus collection for all kinds of specialized things, like grammars, taggers, chunkers, etc.
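A small taste of what that corpus tooling looks like (the Brown corpus and tagger used here are illustrative choices):

```python
import nltk
from nltk.corpus import brown

nltk.download("brown")
nltk.download("averaged_perceptron_tagger")  # resource name may vary by NLTK version

# Corpora come pre-segmented into words and sentences, by category...
print(brown.words(categories="news")[:10])

# ...and NLTK bundles taggers, chunkers, and grammars to build on top.
print(nltk.pos_tag(["Colorless", "green", "ideas", "sleep", "furiously"]))
```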
What are some alternatives?
practice_python_projects - Book on basic to intermediate level Python projects
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
subreddit-analyzer - A comprehensive Data and Text Mining workflow for submissions and comments from any given public subreddit.
TextBlob - Simple, Pythonic text processing: sentiment analysis, part-of-speech tagging, noun phrase extraction, translation, and more.
cltk - The Classical Language Toolkit
bert - TensorFlow code and pre-trained models for BERT
rake-nltk - Python implementation of the Rapid Automatic Keyword Extraction algorithm using NLTK.
Stanza - Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
custom-subs - Anime Subs
polyglot - Multilingual text (NLP) processing toolkit
chat-miner - Parsers and visualizations for chats
PyTorch-NLP - Basic Utilities for PyTorch Natural Language Processing (NLP)