skweak
modsysML
| | skweak | modsysML |
|---|---|---|
| Mentions | 8 | 2 |
| Stars | 909 | 0 |
| Growth | 0.2% | - |
| Activity | 6.2 | 6.3 |
| Latest commit | 6 months ago | 11 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
skweak
-
Entity Extraction with Predefined List
Thanks for pointing me in the right direction. Seems like there are a few other approaches with weak supervision: https://github.com/NorskRegnesentral/skweak
-
[P] Programmatic: Powerful Weak Labeling
Code for https://arxiv.org/abs/2104.09683 found: https://github.com/NorskRegnesentral/skweak
-
Show HN: Programmatic – a REPL for creating labeled data
Hi Raza here, one of the other co-founders.
I know that HN likes to nerd out over technical details, so I thought I'd share a bit more on how we aggregate the noisy labels to clean them up.
At the moment we use the great Skweak [1] open source library to do this. Skweak uses an HMM to infer the most likely unobserved label given the evidence of the votes from each of the labelling functions.
This whole strategy of first training a label model and then training a neural net was pioneered by Snorkel. We’ve used this approach for now but we actually think there are big opportunities for improvement.
We’re working on an end-to-end approach that de-noises the labelling function and trains the model at the same time. So far we’ve seen improvements on the standard benchmarks [2] and are planning to submit to Neurips.
R
[1]: Skweak package: https://github.com/NorskRegnesentral/skweak
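To make the aggregation idea above concrete, here is a minimal, library-free sketch (not skweak's actual API, and simpler than its HMM): given votes from several labelling functions and an assumed accuracy for each, a naive-Bayes-style posterior picks the most likely latent label. The labelling-function names and accuracies are invented for illustration.

```python
import math

def aggregate_votes(votes, accuracies, labels):
    """Infer the most likely latent label from noisy labelling-function votes.

    votes:       dict mapping LF name -> predicted label (None = abstain)
    accuracies:  dict mapping LF name -> assumed accuracy in (0, 1)
    labels:      list of possible labels

    Naive-Bayes assumption: each LF is correct with probability `acc`,
    otherwise it votes uniformly over the remaining labels.
    """
    log_post = {y: 0.0 for y in labels}
    for lf, vote in votes.items():
        if vote is None:  # abstaining LFs carry no evidence
            continue
        acc = accuracies[lf]
        for y in labels:
            if vote == y:
                log_post[y] += math.log(acc)
            else:
                log_post[y] += math.log((1 - acc) / (len(labels) - 1))
    return max(log_post, key=log_post.get)

# Hypothetical labelling functions voting on one span:
votes = {"lf_gazetteer": "PRODUCT", "lf_regex": "ORG", "lf_heuristic": "PRODUCT"}
accs = {"lf_gazetteer": 0.9, "lf_regex": 0.6, "lf_heuristic": 0.7}
print(aggregate_votes(votes, accs, ["PRODUCT", "ORG", "O"]))  # PRODUCT
```

skweak's HMM generalises this by also modelling transitions between the latent labels of adjacent tokens, which is why it works well for sequence labelling tasks like NER.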
-
The hand-picked selection of the best Python libraries released in 2021
- skweak: Weak Supervision for NLP
-
Inevitable Manual Work involved in NLP
For more advanced unsupervised labeling, you should check out skweak
-
How to get Training data for NER?
I'm the main developer behind skweak by the way, happy to hear you're interested in our toolkit :-) We do already have a small list of products (see https://github.com/NorskRegnesentral/skweak/blob/main/data/products.json) extracted from DBPedia and Wikidata, but it may not be exactly the type of products you're looking for.
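A list like the products.json mentioned above is typically used as a gazetteer: a labelling function that tags any token span matching an entry in the list. skweak ships its own gazetteer-based annotators; the standalone sketch below just illustrates the longest-match logic, with an invented toy product list.

```python
def gazetteer_annotate(tokens, gazetteer, label="PRODUCT"):
    """Label token spans that match entries in a predefined list.

    tokens:     list of token strings
    gazetteer:  set of tuples of token strings (multi-word entries allowed)
    Returns (start, end, label) spans, preferring the longest match.
    """
    max_len = max((len(entry) for entry in gazetteer), default=0)
    spans, i = [], 0
    while i < len(tokens):
        match = None
        # Try the longest candidate span at position i first.
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            if tuple(tokens[i:i + n]) in gazetteer:
                match = (i, i + n, label)
                break
        if match:
            spans.append(match)
            i = match[1]  # continue after the matched span
        else:
            i += 1
    return spans

# Toy gazetteer for illustration only:
products = {("iPhone",), ("Galaxy", "S21")}
print(gazetteer_annotate(["I", "bought", "a", "Galaxy", "S21"], products))
# [(3, 5, 'PRODUCT')]
```

In a weak-supervision pipeline, spans produced this way become one noisy vote among many, which the aggregation model then reconciles with the other labelling functions.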
modsysML
What are some alternatives?
snorkel - A system for quickly generating training data with weak supervision [moved to: https://github.com/snorkel-team/snorkel]
texta - Terminology EXtraction and Text Analytics (TEXTA) Toolkit
argilla - Argilla is a collaboration platform for AI engineers and domain experts that require high-quality outputs, full data ownership, and overall efficiency.
ludwig - Low-code framework for building custom LLMs, neural networks, and other AI models
DearPy3D - Dear PyGui 3D Engine (prototyping)
awesome-open-gpt - A curated collection of open-source projects related to GPT
nannyml - Post-deployment data science in Python
AugLy - A data augmentations library for audio, image, text, and video.
OpenPrompt - An Open-Source Framework for Prompt-Learning.
Text-Summarization-using-NLP - Text summarization using NLP: fetches BBC News articles and summarizes them; also supports custom article summarization
taxonomy-of-concepts-for-AI - Taxonomy of common concepts based on matrices of concepts, for AI improvement