dataqa vs transformers
| | dataqa | transformers |
| --- | --- | --- |
| Mentions | 7 | 175 |
| Stars | 245 | 124,557 |
| Growth | - | 2.7% |
| Activity | 6.2 | 10.0 |
| Last commit | almost 2 years ago | 6 days ago |
| Language | JavaScript | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dataqa
- [D] Looking for open source projects to contribute
Hey, I am the creator (and currently the only contributor) of the open-source https://github.com/dataqa/dataqa, a Python library to explore and annotate documents. It uses weak supervision, is based on spaCy, and has a lot of scope for adding more deep learning and ML functionality. I can guide you through it :-). This would be a great opportunity to become the first and lead contributor to an open-source library (other than its creator).
- [P]: Extract and label data from Wikipedia with DataQA
I recently added a new feature to DataQA (https://github.com/dataqa/dataqa) that makes it possible to extract entities from Wikipedia. All you need to do is upload a file with Wikipedia URLs:
- Show HN: DataQA – now possible to link entities to large ontologies
The open-source project is here: https://github.com/dataqa/dataqa. I have just released a feature which I have been working on for a while to solve a problem which I've seen a lot in industry: how to map entities found in text to large knowledge base ontologies.
- [P] Using rules to speed up labelling by 2x
The tool I developed and used for this problem: https://github.com/dataqa/dataqa
- The First Rule of Machine Learning: Start Without Machine Learning
I have seen first-hand, at small and large companies, how problems get tackled with ML without a simple rule or heuristic being tried first. Then, further down the line, the system is compared against a few business rules put together, only to find that the difference in performance did not justify deploying an ML system in the first place.
It's true that as your rules grow in complexity they may become harder to maintain, but the good thing about rules is that they tend to be fully explainable, and they can be encoded by domain experts. So maintaining such a system no longer has to fall exclusively on an ML engineer.
Here is where I insert my plug: I have developed a tool to create rules to solve NLP problems: https://github.com/dataqa/dataqa
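The excerpt above argues for rules before models. As a rough illustration of the kind of baseline it has in mind (generic Python, not DataQA's API), a handful of keyword rules already gives a classifier that is fully explainable and editable by a domain expert:

```python
# Hypothetical keyword-rule baseline for a text classifier.
# This is not DataQA code; it only illustrates "rules before ML".

RULES = {
    "billing": ["invoice", "refund", "charge", "payment"],
    "bug": ["crash", "error", "traceback", "broken"],
    "account": ["password", "login", "locked out"],
}

def classify(text: str, default: str = "other") -> str:
    """Return the first label whose keywords appear in the text."""
    lowered = text.lower()
    for label, keywords in RULES.items():
        if any(keyword in lowered for keyword in keywords):
            return label
    return default

if __name__ == "__main__":
    print(classify("I was charged twice, please refund me"))       # -> billing
    print(classify("The app crashes with a traceback on start"))   # -> bug
```

A baseline like this can later be benchmarked against an ML model, which is exactly the comparison the excerpt says often gets skipped.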
- Show HN: Rules-based labelling tool for NLP
- DataQA: the new Python app to do rules-based text annotation
After working in ML for more than a decade, I became increasingly frustrated with the lack of tools for creating baselines from simple rules and heuristics. It is well known that for most business problems out there, decent baselines can be achieved using heuristics alone. This is why I have developed DataQA (https://github.com/dataqa/dataqa), which uses rules to perform common NLP annotation tasks such as multiclass classification and named entity recognition.
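DataQA's own rule format is not shown in these excerpts, but since the library is described above as being built on spaCy, the flavour of rules-based NER can be sketched with spaCy's EntityRuler (plain spaCy, not DataQA's API; the labels and patterns are made up):

```python
# Rules-based NER with spaCy's EntityRuler; illustrative only,
# not DataQA's internal rule format.
import spacy

nlp = spacy.blank("en")                 # no statistical model needed
ruler = nlp.add_pipe("entity_ruler")

ruler.add_patterns([
    {"label": "ORG", "pattern": "Hugging Face"},
    {"label": "PRODUCT", "pattern": [{"LOWER": "dataqa"}]},
])

doc = nlp("DataQA integrates nicely with Hugging Face datasets.")
print([(ent.text, ent.label_) for ent in doc.ents])
# [('DataQA', 'PRODUCT'), ('Hugging Face', 'ORG')]
```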
transformers
- Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options.
The Flax ecosystem (https://github.com/google/flax?tab=readme-ov-file) and dm-haiku (https://github.com/google-deepmind/dm-haiku) have been among the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
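For reference, prompt lookup decoding in recent transformers releases is switched on through a single `generate` argument; a minimal sketch (the checkpoint name is a placeholder, and `prompt_lookup_num_tokens` assumes a transformers version that includes the feature from the linked issue):

```python
# Sketch of prompt lookup decoding via transformers' generate().
# prompt_lookup_num_tokens makes generate() draft candidate tokens from
# n-grams already present in the prompt; the model name is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder; any causal LM supported by generate() works
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "The quick brown fox jumps over the lazy dog. The quick brown"
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(
    **inputs,
    max_new_tokens=32,
    prompt_lookup_num_tokens=10,  # number of candidate tokens drafted per lookup
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Swapping the prompt-based lookup for a pretrained n-gram model, as the comment suggests, would mean replacing the candidate generator rather than the decoding loop itself.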
- AI enthusiasm #6 - Finetune any LLM you want💡
Most of this tutorial is based on the Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
- Schedule-Free Learning – A New Way to Train
* Superconvergence + LR range finder + Fast AI's Ranger21 optimizer was the go-to setup for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was best, whilst 1e-5 was actually better. However, the 1-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
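For context, the 1-cycle schedule mentioned above is available directly in PyTorch; a minimal sketch with made-up model, step count, and learning rates:

```python
# One-cycle learning-rate schedule with PyTorch; all values are illustrative.
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=1e-3,       # peak LR, e.g. what an LR range finder suggests
    total_steps=1000,  # LR warms up towards max_lr, then anneals back down
)

for step in range(1000):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 10)).sum()
    loss.backward()
    optimizer.step()
    scheduler.step()   # advance the one-cycle schedule once per optimizer step
```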
- Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers, pytorch-gemma and collabing with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
- Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
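Tinkering with the architecture in practice means loading the open weights through the usual transformers classes; a hedged sketch (the checkpoint name is the public Mistral 7B v0.1 release on the Hugging Face Hub, and `device_map="auto"` assumes accelerate is installed):

```python
# Minimal sketch: load the public Mistral 7B weights through transformers.
# Needs a recent transformers release, accelerate (for device_map="auto"),
# and roughly 15 GB of downloaded weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Mistral AI is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```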
- Fail to reproduce the same evaluation metrics score during inference
I am aware that using mixed precision reduces the stability of the weights and that there will be some inconsistency, but I didn't expect it to be this much. I have attached a graph of the evaluation metrics. If someone can give me some insight into this issue, that would be great.
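One way to gauge how much of such a gap comes from precision alone is to run the same batch in full precision and under autocast and compare the logits; a minimal sketch with a placeholder checkpoint, not the poster's actual setup:

```python
# Compare full-precision and autocast (mixed-precision) logits on the same
# batch to see how much drift precision alone introduces.
# The checkpoint below is a placeholder, not the poster's model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

batch = tokenizer(
    ["the movie was great", "the movie was terrible"],
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    logits_fp32 = model(**batch).logits
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        logits_amp = model(**batch).logits.float()

print("max abs logit difference:", (logits_fp32 - logits_amp).abs().max().item())
```

If the drift on a single batch is far smaller than the gap seen in the evaluation metrics, the cause is more likely elsewhere (e.g. preprocessing, checkpoint loading, or evaluation code) than in mixed precision itself.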
What are some alternatives?
diffgram - The AI Datastore for Schemas, BLOBs, and Predictions. Use with your apps or integrate built-in Human Supervision, Data Workflow, and UI Catalog to get the most value out of your AI Data.
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
argilla - Argilla is a collaboration platform for AI engineers and domain experts that require high-quality outputs, full data ownership, and overall efficiency.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
llama - Inference code for Llama models
docarray - Represent, send, store and search multimodal data
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
poutyne - A simplified framework and utilities for PyTorch
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
habitat-sim - A flexible, high-performance 3D simulator for Embodied AI research.
huggingface_hub - The official Python client for the Huggingface Hub.