skweak vs finetuner
| | skweak | finetuner |
| --- | --- | --- |
| Mentions | 8 | 36 |
| Stars | 909 | 1,424 |
| Growth | 0.2% | 1.8% |
| Activity | 6.2 | 5.5 |
| Latest commit | 6 months ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
skweak
-
Entity Extraction with Predefined List
Thanks for pointing me in the right direction. Seems like there are a few other approaches with weak supervision: https://github.com/NorskRegnesentral/skweak
-
[P] Programmatic: Powerful Weak Labeling
Code for https://arxiv.org/abs/2104.09683 found: https://github.com/NorskRegnesentral/skweak
-
Show HN: Programmatic – a REPL for creating labeled data
Hi Raza here, one of the other co-founders.
I know that HN likes to nerd out over technical details so thought I’d share a bit more on how we aggregate the noisy labels to clean them up.
At the moment we use the great Skweak [1] open source library to do this. Skweak uses an HMM to infer the most likely unobserved label given the evidence of the votes from each of the labelling functions.
This whole strategy of first training a label model and then training a neural net was pioneered by Snorkel. We’ve used this approach for now but we actually think there are big opportunities for improvement.
We’re working on an end-to-end approach that de-noises the labelling function and trains the model at the same time. So far we’ve seen improvements on the standard benchmarks [2] and are planning to submit to Neurips.
R
[1]: Skweak package: https://github.com/NorskRegnesentral/skweak
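For readers curious what that HMM aggregation looks like in practice, here is a minimal sketch along the lines of the example in skweak's README: two labelling functions vote on entities, and an HMM aggregates their noisy votes into a single annotation. The `money_detector` heuristic and the tiny gazetteer are illustrative, not part of the library.

```python
import spacy
from skweak import heuristics, gazetteers, aggregation

# Labelling function 1: a simple heuristic for MONEY entities
def money_detector(doc):
    for tok in doc[1:]:
        if tok.text[0].isdigit() and tok.nbor(-1).is_currency:
            yield tok.i - 1, tok.i + 1, "MONEY"

lf1 = heuristics.FunctionAnnotator("money", money_detector)

# Labelling function 2: a small gazetteer of person names
trie = gazetteers.Trie([("Donald", "Trump"), ("Joe", "Biden")])
lf2 = gazetteers.GazetteerAnnotator("presidents", {"PERSON": trie})

nlp = spacy.load("en_core_web_sm")
doc = nlp("Donald Trump paid $750 in federal income taxes")

# Apply the labelling functions, then let the HMM aggregate their votes
doc = lf2(lf1(doc))
hmm = aggregation.HMM("hmm", ["MONEY", "PERSON"])
hmm.fit_and_aggregate([doc])
print(doc.spans["hmm"])
```

In a real pipeline you would fit the HMM over a whole corpus rather than a single document, then train a regular NER model on the aggregated labels, as the Snorkel-style strategy above describes.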
-
The hand-picked selection of the best Python libraries released in 2021
Skweak: Weak Supervision for NLP
-
Inevitable Manual Work involved in NLP
For more advanced unsupervised labeling, you should check out skweak
-
How to get Training data for NER?
I'm the main developer behind skweak by the way, happy to hear you're interested in our toolkit :-) We do already have a small list of products (see https://github.com/NorskRegnesentral/skweak/blob/main/data/products.json) extracted from DBPedia and Wikidata, but it may not be exactly the type of products you're looking for.
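Using a list like that in skweak amounts to building a gazetteer labelling function. A minimal sketch, assuming you have cloned the repository so that data/products.json is available locally:

```python
import spacy
from skweak import gazetteers

# Build tries from the product list shipped in the skweak repository
tries = gazetteers.extract_json_data("data/products.json")
lf = gazetteers.GazetteerAnnotator("products", tries)

nlp = spacy.load("en_core_web_sm")
doc = lf(nlp("She compared the iPhone with a ThinkPad before buying"))
print(doc.spans["products"])
```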
finetuner
-
How do you think search will change with technology like ChatGPT, Bing’s new AI search engine and the upcoming Google Bard?
And all of that has something to do with finetuners. A finetuner basically fine-tunes AI models for specific use cases. With it, companies can create a custom search experience that is tailored to their specific needs. I also wonder how this is going to be integrated into SEO tools soon, since those tools are catered to traditional search engines.
-
Combining multiple lists into one, meaningfully
Combining multiple lists into one is tough, but it's doable if you have the right approach. Fine-tuning GPT-3 might help, but finding enough examples is hard. You could use existing text data or manually label a set of training examples. A finetuner could help too. It's a platform-agnostic toolkit that can fine-tune pre-trained models, and it's customizable for lots of tasks.
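For reference, OpenAI's (legacy) fine-tuning flow at the time expected prompt/completion pairs in a JSONL file; a hypothetical list-merging example might be prepared like this:

```python
import json

# Hypothetical training pairs for a list-merging fine-tune
examples = [
    {
        "prompt": "List A: apples, pears\nList B: pears, plums\nMerged:",
        "completion": " apples, pears, plums\n",
    },
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Then (legacy CLI): openai api fine_tunes.create -t train.jsonl -m davinci
```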
-
speech_recognition not able to convert the full live audio to text. Please help me to fine-tune it.
You can make the pause threshold a little longer for pauses between words and phrases. You can also use the phrase detection mode, which sets a time limit for the entire phrase instead of ending the transcription prematurely. If your microphone sensitivity is low, you can also try adjusting the energy threshold. If you want, you can use finetuners.
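For concreteness, those knobs exist directly on the `speech_recognition` Recognizer; a minimal sketch (the threshold values are arbitrary starting points, not recommendations):

```python
import speech_recognition as sr

r = sr.Recognizer()
r.pause_threshold = 1.5            # seconds of silence that end a phrase (default 0.8)
r.energy_threshold = 400           # raise for noisy rooms, lower for quiet microphones
r.dynamic_energy_threshold = True  # adapt the threshold to ambient noise over time

with sr.Microphone() as source:
    r.adjust_for_ambient_noise(source, duration=1)
    # phrase_time_limit caps the length of a single phrase instead of
    # letting a long pause cut the transcription short
    audio = r.listen(source, phrase_time_limit=30)

print(r.recognize_google(audio))
```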
-
Questions about fine-tuned results. Should the completion results be identical to fine-tune examples?
It's possible that completion results may be identical to fine-tuned examples, but it's not guaranteed. Even with the same prompt, slight variations in output are expected due to the probabilistic nature of language models. You can experiment with different settings and parameters, including finetuners like these.
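One concrete setting worth trying is `temperature`: at 0 the model is close to deterministic and tends to reproduce the fine-tuning examples, while higher values add variation. A sketch against OpenAI's (pre-1.0) Completion API, with a placeholder fine-tuned model name:

```python
import openai

response = openai.Completion.create(
    model="davinci:ft-acme-2023-03-01",  # placeholder: use your own fine-tuned model ID
    prompt="Classify the sentiment: 'The delivery was fast and painless.' ->",
    temperature=0,    # 0 = near-deterministic, closest to the fine-tuned examples
    max_tokens=16,
)
print(response["choices"][0]["text"])
```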
-
How can I create a dataset to refine Whisper AI from old videos with subtitles?
You can try creating your own dataset. Get the audio data you want, preprocess it, and then create a custom dataset you can use to fine-tune. You could also use finetuners like these if you want.
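As a rough sketch of that preprocessing, assuming the subtitles are available as an .srt file next to the video (the `srt` and `pydub` packages are one possible toolchain, not something prescribed by the post; file paths are placeholders):

```python
import os
import srt                      # pip install srt
from pydub import AudioSegment  # pip install pydub (requires ffmpeg)

audio = AudioSegment.from_file("video.mp4")  # placeholder input file
with open("video.srt", encoding="utf-8") as f:
    subtitles = list(srt.parse(f.read()))

os.makedirs("clips", exist_ok=True)
for i, sub in enumerate(subtitles):
    start_ms = int(sub.start.total_seconds() * 1000)
    end_ms = int(sub.end.total_seconds() * 1000)
    # Whisper expects 16 kHz mono audio; pair each clip with its transcript
    clip = audio[start_ms:end_ms].set_frame_rate(16000).set_channels(1)
    clip.export(f"clips/{i:05d}.wav", format="wav")
    with open(f"clips/{i:05d}.txt", "w", encoding="utf-8") as t:
        t.write(sub.content)
```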
-
A Guide to Using OpenTelemetry in Jina for Monitoring and Tracing Applications
We derived the dataset by pre-processing the DeepFashion dataset using Finetuner. The image labels generated by Finetuner are extracted and formatted to produce the text attribute of each product.
-
[D] Looking for an open source Downloadable model to run on my local device.
You can either use Hugging Face Transformers, as they have a lot of pre-trained models that you can customize, or Finetuners like this one, which is a toolkit for fine-tuning multiple models.
-
Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models
Very recently, a few non-English and multilingual CLIP models have appeared, using various sources of training data. In this article, we’ll evaluate a multilingual CLIP model’s performance in a language other than English, and show how you can improve it even further using Jina AI’s Finetuner.
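The workflow in that article boils down to a call to Finetuner's cloud `fit` API. A minimal sketch under stated assumptions: the dataset and model names below are placeholders, and the exact CLIP variants available should be checked with `finetuner.describe_models()`.

```python
import finetuner
from docarray import DocumentArray

finetuner.login()  # Finetuner runs fine-tuning jobs in Jina AI's cloud

train_data = DocumentArray.pull("my-clip-train-data")  # placeholder dataset name

run = finetuner.fit(
    model="clip-base-multi",   # placeholder; see finetuner.describe_models()
    train_data=train_data,
    loss="CLIPLoss",           # contrastive loss for text-image pairs
    epochs=5,
)
print(run.status())
```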
-
Is there a way I can feed the gpt3 model database object like tables? I know we can create fine tune model but not sure about the completion part. Please help!
I think you can convert your data into text and fine-tune the model on it. But that might not be the ideal way to go, since you're tying everything to that one model. Try transfer learning or fine-tuning with a finetuner.
-
Classification using prompt or fine tuning?
You can try prompt-based classification or fine-tuning with a Finetuner. Prompts work well for simple tasks, but fine-tuning may give better results for complex ones, although it's going to need more resources. Try both and see what works best for you.
What are some alternatives?
snorkel - A system for quickly generating training data with weak supervision
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]
argilla - Argilla is a collaboration platform for AI engineers and domain experts that require high-quality outputs, full data ownership, and overall efficiency.
Jina AI examples - Jina examples and demos to help you get started
DearPy3D - Dear PyGui 3D Engine (prototyping)
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
jina - ☁️ Build multimodal AI applications with cloud-native stack
AugLy - A data augmentations library for audio, image, text, and video.
Promptify - Prompt Engineering | Prompt Versioning | Use GPT or other prompt based models to get structured output. Join our discord for Prompt-Engineering, LLMs and other latest research
Text-Summarization-using-NLP - Text summarization using NLP: fetches a BBC News article and summarizes its text; also includes custom article summarization.
pysot - SenseTime Research platform for single object tracking, implementing algorithms like SiamRPN and SiamMask.