bert
transformers
| | bert | transformers |
|---|---|---|
| Mentions | 49 | 175 |
| Stars | 36,992 | 125,021 |
| Growth | 1.3% | 2.7% |
| Activity | 0.0 | 10.0 |
| Latest commit | 17 days ago | about 6 hours ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bert
-
OpenAI – Application for US trademark "GPT" has failed
"…task-specific parameters, and is trained on the downstream tasks by simply fine-tuning all pre-trained parameters."
[0] https://arxiv.org/abs/1810.04805
-
Integrate LLM Frameworks
The release of BERT in 2018 kicked off the language model revolution. The Transformers architecture succeeded RNNs and LSTMs to become the architecture of choice. Unbelievable progress was made in a number of areas: summarization, translation, text classification, entity classification and more. 2023 took things to another level with the rise of large language models (LLMs). Models with billions of parameters showed an amazing ability to generate coherent dialogue.
-
Embeddings: What they are and why they matter
The general idea is that you have a particular task & dataset, and you optimize these vectors to maximize that task. So the properties of these vectors - what information is retained and what is left out during the 'compression' - are effectively determined by that task.
In general, the core task for the various "LLM tools" involves prediction of a hidden word, trained on very large quantities of real text - thus also mirroring whatever structure (linguistic, syntactic, semantic, factual, social bias, etc) exists there.
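As a toy illustration of that hidden-word objective, here is a minimal sketch of BERT-style masked-token preparation (plain Python; the `mask_tokens` helper and the tiny vocabulary are made up for illustration, loosely following the 80/10/10 masking recipe from the BERT paper):

```python
import random

MASK = "[MASK]"
VOCAB = ["cat", "dog", "sat", "mat", "the", "on"]  # toy vocabulary

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Pick ~15% of positions as prediction targets.
    Of those: 80% become [MASK], 10% a random token, 10% stay unchanged."""
    rng = random.Random(seed)
    masked, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok  # the model must predict the original token here
            roll = rng.random()
            if roll < 0.8:
                masked[i] = MASK
            elif roll < 0.9:
                masked[i] = rng.choice(VOCAB)
            # else: leave the token unchanged

    return masked, targets

tokens = "the cat sat on the mat".split()
masked, targets = mask_tokens(tokens)
print(masked, targets)
```

The model is then trained to recover `targets` from `masked`, which is how it ends up absorbing the linguistic and factual structure of the corpus.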
If you want to see how the sausage is made and look at the actual algorithms, the two key approaches to read up on would probably be Mikolov's word2vec (https://arxiv.org/abs/1301.3781), with its CBOW (Continuous Bag of Words) and Continuous Skip-Gram models, which are based on relatively simple mathematical optimization, and then the BERT (https://arxiv.org/abs/1810.04805) architecture, which does a conceptually similar thing but with a large neural network that can learn more from the same data. For both, you can either read the original papers or look up blog posts or videos that explain them; different people have different preferences for how readable academic papers are.
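To make the word2vec side concrete, here is a minimal sketch of how Continuous Skip-Gram turns raw text into (center, context) training pairs; the window size and the toy corpus are made up for illustration, and a real implementation would also subsample frequent words and use negative sampling:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) pairs as in Skip-Gram:
    each word is paired with every word within `window` positions of it."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

corpus = "the quick brown fox".split()
print(skipgram_pairs(corpus, window=1))
# → [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#    ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
```

The embedding vectors are then optimized so that a center word's vector predicts its context words, which is exactly the "what information is retained is determined by the task" point above. CBOW is the mirror image: the context words jointly predict the center word.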
- Ernie, China's ChatGPT, Cracks Under Pressure
-
Ask HN: How to Break into AI Engineering
Could you post a link to "the BERT paper"? I've read some, but would be interested in reading anything that anyone considers definitive :) Is it this one? "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding": https://arxiv.org/abs/1810.04805
-
How to leverage the state-of-the-art NLP models in Rust
The Rust crate rust_bert is an implementation of the BERT language model (Devlin, Chang, Lee, Toutanova, 2018: https://arxiv.org/abs/1810.04805). The base model is implemented in the bert_model::BertModel struct. Several language model heads have also been implemented, including:
-
Notes on training BERT from scratch on an 8GB consumer GPU
The achievement of training a BERT model to 90% of the GLUE score on a single GPU in ~100 hours is indeed impressive. As for the original BERT pretraining run, the paper [1] mentions that the pretraining took 4 days on 16 TPU chips for the BERT-Base model and 4 days on 64 TPU chips for the BERT-Large model.
Regarding the translation of these techniques to the pretraining phase for a GPT model, it is possible that some of the optimizations and techniques used for BERT could be applied to GPT as well. However, the specific architecture and training objectives of GPT might require different approaches or additional optimizations.
As for the SOPHIA optimizer, it is designed to improve the training of deep learning models by adaptively adjusting the learning rate and momentum. According to the paper [2], SOPHIA has shown promising results in various deep learning tasks. It is possible that the SOPHIA optimizer could help improve the training of BERT and GPT models, but further research and experimentation would be needed to confirm its effectiveness in these specific cases.
[1] https://arxiv.org/abs/1810.04805
-
List of AI-Models
- Bert: Pre-Training of Deep Bidirectional Transformers for Language Understanding
- Google internally developed chatbots like ChatGPT years ago
transformers
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options.
The Flax ecosystem
https://github.com/google/flax?tab=readme-ov-file
or dm-haiku
https://github.com/google-deepmind/dm-haiku
were some of the best-developed communities in the JAX AI field
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
-
Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
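A rough sketch of the prompt-lookup idea (not the actual transformers implementation; the function name and parameters are made up): find the longest recent n-gram that already occurred earlier in the context, and propose the tokens that followed it as draft continuations for the model to cheaply verify.

```python
def prompt_lookup(tokens, max_ngram=3, num_draft=4):
    """Return up to num_draft candidate tokens by matching the longest
    suffix n-gram of `tokens` against an earlier occurrence in `tokens`."""
    for n in range(max_ngram, 0, -1):
        if len(tokens) < n + 1:
            continue
        suffix = tokens[-n:]
        # scan earlier occurrences of the suffix, most recent first
        for start in range(len(tokens) - n - 1, -1, -1):
            if tokens[start:start + n] == suffix:
                # propose the tokens that followed that earlier occurrence
                return tokens[start + n:start + n + num_draft]
    return []

ctx = "the cat sat on the mat and the cat".split()
print(prompt_lookup(ctx))  # → ['sat', 'on', 'the', 'mat']
```

Swapping in a pretrained n-gram model would just replace the suffix search with a lookup into precomputed n-gram continuation tables, which is why it seems like a small change.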
-
AI enthusiasm #6 - Finetune any LLM you want💡
Most of this tutorial is based on Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check their work and give them a star on GitHub, if you please ❤️
-
Schedule-Free Learning – A New Way to Train
* Superconvergence + LR range finder + Fast AI's Ranger21 optimizer was the go-to combination for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was best, whilst 1e-5 was actually better. However, the 1-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
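For reference, the 1-cycle schedule mentioned here can be sketched in a few lines. This is a simplified linear warmup/decay variant with made-up parameter names; Smith's original also cycles momentum, and common implementations use cosine annealing for the decay phase:

```python
def one_cycle_lr(step, total_steps, max_lr=1e-3, pct_warmup=0.3, final_div=25.0):
    """Simplified 1-cycle: linear warmup to max_lr, then linear decay
    down to max_lr / final_div over the remaining steps."""
    warmup_steps = int(total_steps * pct_warmup)
    min_lr = max_lr / final_div
    if step < warmup_steps:
        frac = step / warmup_steps          # warmup phase: ramp up
        return min_lr + frac * (max_lr - min_lr)
    frac = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr - frac * (max_lr - min_lr)  # decay phase: ramp down

total = 100
lrs = [one_cycle_lr(s, total) for s in range(total)]
print(f"peak LR {max(lrs):.1e} at step {lrs.index(max(lrs))}")
```

The single warmup-then-decay cycle is what survived the move to transformers, even after the range finder's peak-LR suggestion stopped being trustworthy.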
-
Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
-
Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
-
Failing to reproduce the same evaluation metric scores during inference.
I am aware that using mixed precision reduces the stability of the weights and that there will be some inconsistency, but I didn't expect it to be this much. I have attached a graph of the evaluation metrics. If someone can give me some insight into this issue, that would be great.
What are some alternatives?
NLTK - NLTK Source
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
bert-sklearn - a sklearn wrapper for Google's BERT model
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
pysimilar - A python library for computing the similarity between two strings (text) based on cosine similarity
llama - Inference code for Llama models
PURE - [NAACL 2021] A Frustratingly Easy Approach for Entity and Relation Extraction https://arxiv.org/abs/2010.12812
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
NL_Parser_using_Spacy - NLP parser using NER and TDD
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
cakechat - CakeChat: Emotional Generative Dialog System
huggingface_hub - The official Python client for the Hugging Face Hub.