rake-nltk vs flashtext
| | rake-nltk | flashtext |
|---|---|---|
| Mentions | 4 | 8 |
| Stars | 1,060 | 5,596 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | almost 2 years ago | 4 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rake-nltk
- rake-nltk 1.0.6 released. Comes with the flexibility to choose your own sentence and word tokenizers.
- PMI for WordClouds
I'm not sure what you mean by tokenizing phrases or concepts. Extracting institution names specifically would fall under NER (named entity recognition); you can do this with spaCy. Extracting commonly used phrases would fall under keyword extraction. For this, you can study frequencies of n-grams of length > 1 and optionally filter based on POS (e.g. NOUN + ADJ). I've never used RAKE (https://github.com/csurfer/rake-nltk), but I've heard it's also a popular method.
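To make the RAKE suggestion concrete, here is a minimal rake-nltk sketch, assuming the NLTK stopwords and punkt data are already downloaded; the sample text is made up for illustration:

```python
from rake_nltk import Rake  # pip install rake_nltk

text = (
    "Keyword extraction identifies the phrases that best summarize a document. "
    "RAKE scores candidate phrases by word frequency and co-occurrence degree."
)

# Defaults to NLTK English stopwords and punctuation as phrase delimiters
r = Rake()
r.extract_keywords_from_text(text)

# Highest-scoring phrases first, as (score, phrase) pairs
for score, phrase in r.get_ranked_phrases_with_scores():
    print(f"{score:.1f}  {phrase}")
```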
flashtext
- Show HN: LLMs can generate valid JSON 100% of the time
I have some other comment on this thread where I point out why I don’t think it’s superficial. Would love to get your feedback on that if you feel like spending more time on this thread.
But it’s not obscure? FlashText was a somewhat popular paper at the time (2017) with a popular repo (https://github.com/vi3k6i5/flashtext). Their paper was pretty derivative of Aho-Corasick, which they cited. If you think they genuinely fucked up, leave an issue on their repo (I’m, maybe to your surprise lol, not the author).
Anyway, I'm not a fan of the whataboutery here. I don't think OG's paper is up to snuff on its lit review - do you?
- [P] What is the most efficient way to do pattern matching word-to-word?
The flashtext library basically builds this kind of trie from the keywords you give it.
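As a rough sketch of that trie-based matching with flashtext's KeywordProcessor (the keywords and canonical names below are just for illustration):

```python
from flashtext import KeywordProcessor  # pip install flashtext

kp = KeywordProcessor(case_sensitive=False)

# Map surface forms to a canonical name; flashtext stores them in a trie,
# so matching cost scales with the text length, not the number of keywords.
kp.add_keyword("Big Apple", "New York")
kp.add_keyword("NYC", "New York")
kp.add_keyword("Bay Area")

found = kp.extract_keywords("I flew from the Bay Area to NYC last week.")
print(found)  # ['Bay Area', 'New York']
```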
- What is the most efficient way to find substrings in strings?
Seems like https://github.com/vi3k6i5/flashtext would be better suited here.
- [P] Library for end-to-end neural search pipelines
I started developing this tool after using Haystack. Pipelines are easier to build with cherche because of the operators. Also, cherche offers FlashText and Lunr.py retrievers that are not available in Haystack and that I needed for the problem I wanted to solve. Haystack is clearly more complete, but I think it's also more complex to use.
- How can I speed up thousands of re.subs()?
For the parts of the text that don't require regex, https://github.com/vi3k6i5/flashtext might help
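A hedged sketch of what that swap could look like, replacing a loop of fixed-string re.sub calls with one flashtext pass (the replacement map is invented for the example):

```python
import re
from flashtext import KeywordProcessor  # pip install flashtext

replacements = {"colour": "color", "favourite": "favorite", "optimise": "optimize"}
text = "My favourite colour scheme is easy to optimise."

# Baseline: one re.sub per fixed string; cost grows with the number of patterns.
regex_out = text
for old, new in replacements.items():
    regex_out = re.sub(r"\b" + re.escape(old) + r"\b", new, regex_out)

# flashtext: all replacements in a single pass over the text.
kp = KeywordProcessor()
for old, new in replacements.items():
    kp.add_keyword(old, new)
flash_out = kp.replace_keywords(text)

print(regex_out)
print(flash_out)  # My favorite color scheme is easy to optimize.
```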
- My first NLP pipeline using SpaCy: detect news headlines with company acquisitions
Using spaCy to parse the headlines, remove stop words, etc. might be OK, but I think the problem is quite narrow, so a set of fixed regex searches might work quite well. If regex is too slow, try: https://github.com/vi3k6i5/flashtext
- What tech do I need to learn to programmatically parse ingredients from a recipe?
I would probably use something like [flashtext](https://github.com/vi3k6i5/flashtext), which should not be too hard to port to Kotlin.
- Quickest way to check that 14,000 strings aren't in an original string.
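For that last mention, a rough sketch of how flashtext could check whether any of a large keyword list occurs in a string; the short list here stands in for the 14,000 strings from the post title:

```python
from flashtext import KeywordProcessor  # pip install flashtext

# Hypothetical list standing in for the ~14,000 strings.
banned = ["free money", "click here", "act now"]

kp = KeywordProcessor()
kp.add_keywords_from_list(banned)

text = "This is a perfectly ordinary sentence."
# extract_keywords scans the text once; an empty result means none of the
# registered strings occur in it.
print(kp.extract_keywords(text) == [])  # True
```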
What are some alternatives?
yake - Single-document unsupervised keyword extraction
KeyBERT - Minimal keyword extraction with BERT
pke - Python Keyphrase Extraction module
magnitude - A fast, efficient universal vector embedding utility package.
NLTK - NLTK Source
Optimus - Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark
WordDumb - A calibre plugin that generates Kindle Word Wise and X-Ray files for KFX, AZW3, MOBI and EPUB eBook.
simple_keyword_clusterer - A simple machine learning package to cluster keywords in higher-level groups.
gensim - Topic Modelling for Humans
hepscrape - arXiv:hep-ph scraper
AnnA_Anki_neuronal_Appendix - Using machine learning on your anki collection to enhance the scheduling via semantic clustering and semantic similarity