transformers
| | kaggle-environments | transformers |
|---|---|---|
| Mentions | 55 | 175 |
| Stars | 273 | 125,021 |
| Growth | 1.5% | 3.1% |
| Activity | 6.6 | 10.0 |
| Latest commit | about 2 months ago | 4 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kaggle-environments
- Data Science Roadmap with Free Study Material
- Help needed! My first hackathon
If you are interested in Data Science, you may want to look at Kaggle competitions. https://www.kaggle.com/competitions
- What's a statistical / research methodology, that's not usually taught in grad programs, that you think more IO's should be aware about?
- Freaking out about how I’m inexperienced to land an internship and eventually a job
Secondly, if you feel like you do not have enough skills or lack practice answering problem statements, there are a lot of good websites where you can find interesting projects. I would recommend starting to participate in some Kaggle competitions or downloading some free Google datasets and playing with them.
- Capitalism provides half-assed solutions to extinction-level problems caused by capitalism
For reference: Kaggle is a Google product. You can see the list of current competitions here.
- Where can neural networks take me? - Semi-existential crisis
- What Can I Do With My Time as a Substitute for Strategy Computer Games?
You could try Kaggle competitions, or participating in forecasting markets (as you stated) is another option. You don't need any specific skill set to be a forecaster: the rules of the bet are stipulated, and from there it's just based on your ability to predict the outcome. You could also try your hand at investing in the stock market, or try to make money betting on sports games. If you're very good at this stuff, I'm sure you can make a lot of money doing it. The thing to keep in mind is that video games are generally much, much easier than real life.
- What is the best advanced professional certification for Data Science/ML/DL/MLOps?
As to the specifics of your projects, that's up to you. Try browsing Kaggle; check out some of the work we have on The Pudding; check out some journalism examples to see what you can try to build on or improve.
- Suggestions for projects on kaggle for cv?
- Hi! I'm doing research on AI innovation. Does anybody know any specific platform where I can learn/understand and get case studies or ongoing projects that companies are implementing? Thanks for your help!
You might want to look at kaggle competitions.
transformers
- Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options: the Flax ecosystem (https://github.com/google/flax?tab=readme-ov-file) or dm-haiku (https://github.com/google-deepmind/dm-haiku) were some of the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
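For anyone comparing the options above, here is a minimal sketch of what defining and running a model in Flax's linen API looks like; the layer sizes and input shape are purely illustrative and not taken from any of the linked repos.

```python
# Minimal Flax (linen) sketch: a tiny MLP, initialized and applied with JAX.
# All sizes here are illustrative placeholders.
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    hidden: int = 128
    out: int = 10

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(self.hidden)(x)   # first dense layer
        x = nn.relu(x)                 # nonlinearity
        return nn.Dense(self.out)(x)   # output projection

model = MLP()
# Parameters live outside the module, created from an explicit PRNG key.
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 32)))
y = model.apply(params, jnp.ones((1, 32)))
print(y.shape)  # (1, 10)
```

dm-haiku follows a similar functional pattern, with hk.transform playing the role of the init/apply split.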
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
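For anyone who wants to try it, here is a minimal sketch of enabling prompt lookup decoding in transformers. It assumes a recent transformers version where generate() accepts prompt_lookup_num_tokens; the model choice is just illustrative.

```python
# Sketch: prompt lookup decoding in HF transformers (assumes a recent
# version where generate() takes prompt_lookup_num_tokens). The ngram
# drafts come from the existing context, so the output text is unchanged,
# only produced faster when drafts are accepted.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox jumps over the", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    prompt_lookup_num_tokens=3,  # max length of an ngram draft per step
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```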
- AI enthusiasm #6 - Finetune any LLM you want💡
Most of this tutorial is based on the Hugging Face course on Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
- Schedule-Free Learning – A New Way to Train
* Superconvergence + LR range finder + Fast AI's Ranger21 optimizer was the go-to combination for CNNs, and it worked fabulously well, but on transformers the learning rate range finder said 1e-3 was the best, whilst 1e-5 actually worked better. However, the 1-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
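For context, the 1-cycle schedule mentioned there is built into PyTorch; here is a minimal sketch (the model, learning rates, and step count are arbitrary placeholders, not recommendations):

```python
# Sketch of the 1-cycle LR schedule using PyTorch's built-in scheduler.
# All numbers are placeholders.
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, total_steps=1000
)

for step in range(1000):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 10)).sum()  # dummy loss for illustration
    loss.backward()
    optimizer.step()
    scheduler.step()  # LR warms up toward max_lr, then anneals back down
```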
- Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers, pytorch-gemma and collabing with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
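For readers wondering what the RoPE fix touches, here is a rough, generic sketch of rotary position embeddings. This is the textbook interleaved formulation for illustration only, not the actual transformers/Gemma code from the linked PRs.

```python
# Generic sketch of rotary position embeddings (RoPE), interleaved form.
# Illustrative only -- not the transformers/Gemma implementation.
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    seq_len, dim = x.shape                      # dim must be even
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    angles = torch.outer(torch.arange(seq_len).float(), inv_freq)
    cos, sin = angles.cos(), angles.sin()       # (seq_len, dim/2)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin          # rotate each (x1, x2) pair
    out[:, 1::2] = x1 * sin + x2 * cos          # by a position-dependent angle
    return out
```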
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
- Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
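For the tinkering case, a minimal sketch of pulling that implementation through the transformers API; the model id below is the public Mistral-7B-v0.1 checkpoint, and actually running it takes a recent GPU or a lot of RAM.

```python
# Sketch: loading Mistral via the transformers implementation linked above.
# Needs roughly 15 GB of memory in fp16; model id is the public checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```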
- Fail to reproduce the same evaluation metrics score during inference.
I am aware that using mixed precision reduces the stability of the weights and that there will be some inconsistency, but I didn't expect it to be this much. I have attached the graph of evaluation metrics. If someone can give me some insight into this issue, that would be great.
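To illustrate the kind of drift being described, here is a small self-contained comparison of an fp32 forward pass against a mixed-precision one; the layer and sizes are arbitrary, and some numeric difference is expected (just normally not a large one).

```python
# Sketch: measuring fp32 vs. mixed-precision drift on a single layer.
# Sizes are arbitrary; a small nonzero difference is normal.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(512, 512)
x = torch.randn(4, 512)

with torch.no_grad():
    ref = model(x)                                        # full fp32 pass
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        mixed = model(x)                                  # autocast pass

print((ref - mixed.float()).abs().max())  # magnitude of the drift
```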
What are some alternatives?
CKAN - CKAN is an open-source DMS (data management system) for powering data hubs and data portals. CKAN makes it easy to publish, share and use data. It powers catalog.data.gov, open.canada.ca/data, data.humdata.org among many other sites.
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
stable-baselines - A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
llama - Inference code for Llama models
docarray - Represent, send, store and search multimodal data
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
datasci-ctf - A capture-the-flag exercise based on data analysis challenges
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
dremio-oss - Dremio - the missing link in modern data
huggingface_hub - The official Python client for the Huggingface Hub.