snorkel
spaCy

| | snorkel | spaCy |
|---|---|---|
| Mentions | 6 | 109 |
| Stars | 5,828 | 30,917 |
| Growth | 0.0% | 1.2% |
| Activity | 5.2 | 9.0 |
| Latest commit | 10 months ago | 14 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
snorkel
- Harnessing Weak Supervision to Isolate Sign Language in Crowded News Videos
Hello everyone, we are trying to make a large dataset for Sign Language translation, inspired by BSL-1K [1]. As part of cleaning our collected videos, we use a nice technique for aggregating heuristic labels [2]. We thought it was interesting enough to share with people on here.
[1] https://www.robots.ox.ac.uk/~vgg/research/bsl1k/
[2] https://github.com/snorkel-team/snorkel
- [P] We are building a curated list of open source tooling for data-centric AI workflows, looking for contributions.
The paid product came out of an open source tool: https://github.com/snorkel-team/snorkel
- [Discussion] "data sourcing will be more important than model building in the era of foundational model fine-tuning"
- Can't use load_data from utils
Actually, I referenced it in my issue as well. There seem to be different utils.py files in different folders of the snorkel-tutorials repo, but the utils module you get after importing snorkel is a different [file](https://github.com/snorkel-team/snorkel/blob/master/snorkel/utils/core.py), i.e. the utils file in the main snorkel repo is not the same.
- [D] A hand-picked selection of the best Python ML Libraries of 2021
- [Discussion] Methods for enhancing high-quality dataset A with low-quality dataset
Snorkel (https://github.com/snorkel-team/snorkel) might provide you exactly what you are looking for. From the docs:
spaCy
- spaCy – Industrial-Strength Natural Language Processing in Python
- 350M Tokens Don't Lie: Love and Hate in Hacker News
Is this just using an LLM to be cool? How does a pure LLM with a simple "on a scale of 0-10" prompt stack up against traditional, battle-tested sentiment analysis tools?
Gemini suggests NLTK and spaCy
https://www.nltk.org/
https://spacy.io/
- How I discovered Named Entity Recognition while trying to remove gibberish from a string.
- Step by step guide to create customized chatbot by using spaCy (Python NLP library)
Hi Community, in this article I will demonstrate the steps below to create your own chatbot using spaCy (spaCy is an open-source library for advanced natural language processing, written in Python and Cython):
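As a rough sketch of the rule-based flavor of such a chatbot, here is a minimal intent matcher built on spaCy's `Matcher` over a blank English pipeline (tokenizer only, no pretrained model download). The intents, patterns, and replies are invented for illustration:

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")          # tokenizer-only pipeline, no model needed
matcher = Matcher(nlp.vocab)

# Token-level patterns per intent (spaCy v3 Matcher.add signature).
matcher.add("GREET", [[{"LOWER": {"IN": ["hi", "hello", "hey"]}}]])
matcher.add("HOURS", [[{"LOWER": "opening"}, {"LOWER": "hours"}]])

REPLIES = {
    "GREET": "Hello! How can I help?",
    "HOURS": "We are open 9am-5pm, Monday to Friday.",
}

def respond(text):
    """Return the reply for the first matched intent, or a fallback."""
    doc = nlp(text)
    for match_id, start, end in matcher(doc):
        return REPLIES[nlp.vocab.strings[match_id]]
    return "Sorry, I didn't understand that."
```

For example, `respond("What are your opening hours?")` hits the HOURS pattern, while unmatched input falls through to the fallback reply.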
- Best AI SEO Tools for NLP Content Optimization
SpaCy: An open-source library providing tools for advanced NLP tasks like tokenization, entity recognition, and part-of-speech tagging.
- Who has the best documentation you've seen or liked in 2023
spaCy https://spacy.io/
- A beginner's guide to sentiment analysis using OceanBase and spaCy
In this article, I'm going to walk through a sentiment analysis project from start to finish, using open-source Amazon product reviews. However, using the same approach, you can easily implement mass sentiment analysis on your own products. We'll explore an approach to sentiment analysis with one of the most popular Python NLP packages: spaCy.
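A stripped-down sketch of that approach, using spaCy only for tokenization: the tiny sentiment lexicon below is made up for illustration, standing in for the trained model (or an extension such as spacytextblob) a real project would use.

```python
import spacy

nlp = spacy.blank("en")   # tokenizer-only pipeline, no model download

# Made-up mini lexicon; a real project would use a proper sentiment model.
LEXICON = {"love": 1, "great": 1, "good": 1,
           "bad": -1, "terrible": -1, "broke": -1}

def sentiment(text):
    """Sum of per-token lexicon scores: >0 positive, <0 negative."""
    return sum(LEXICON.get(tok.lower_, 0) for tok in nlp(text))
```

So `sentiment("I love this product, works great")` comes out positive, while a review like "Terrible quality, it broke in a day" scores negative.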
- Retrieval Augmented Generation (RAG): How To Get AI Models Learn Your Data & Give You Answers
- Against LLM Maximalism
spaCy [0] is a state-of-the-art, easy-to-use NLP library from the pre-LLM era. This post is the spaCy founder's thoughts on how to integrate LLMs with the kinds of problems that "traditional" NLP is used for right now. It's an advertisement for Prodigy [1], their paid tool for using LLMs to assist data labeling. That said, I think I largely agree with the premise, and it's worth reading the entire post.
The steps described in "LLM pragmatism" are basically what I see my data science friends doing: it's hard to justify the cost (money and latency) of using LLMs directly for all tasks, and even if you want to, you'll need a baseline model to compare against, so why not use LLMs for dataset creation or augmentation in order to train a classic supervised model?
[0] https://spacy.io/
[1] https://prodi.gy/
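That "label once with an LLM, then train a cheap classic model" loop can be sketched with scikit-learn. The texts and labels below are hard-coded stand-ins for what an LLM labeling pass would produce:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-ins for texts labeled once by an LLM (1 = positive, 0 = negative).
texts = [
    "refund never arrived", "fast shipping, thanks", "item was damaged",
    "great support team", "package lost again", "works perfectly",
]
llm_labels = [0, 1, 0, 1, 0, 1]

# Train a cheap classic model so inference needs no further LLM calls.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, llm_labels)
pred = clf.predict(["shipping was fast"])
```

The trained pipeline then serves predictions at negligible cost and latency, and doubles as the baseline to compare against if you later consider calling an LLM directly.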
- Swirl: An open-source search engine with LLMs and ChatGPT to provide all the answers you need
What are some alternatives?
cleanlab - The standard data-centric AI package for data quality and machine learning with messy, real-world data and labels.
Stanza - Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
argilla - Argilla is a collaboration tool for AI engineers and domain experts to build high-quality datasets
NLTK - NLTK Source
skweak - skweak: A software toolkit for weak supervision applied to NLP tasks
TextBlob - Simple, Pythonic, text processing--Sentiment analysis, part-of-speech tagging, noun phrase extraction, translation, and more.
weasel - Weakly Supervised End-to-End Learning (NeurIPS 2021)
Jieba - "Jieba" Chinese text segmentation
BotLibre - An open platform for artificial intelligence, chat bots, virtual agents, social media automation, and live chat automation.
polyglot - Multilingual text (NLP) processing toolkit
snorkel-tutorials - A collection of tutorials for Snorkel
textacy - NLP, before and after spaCy
