bert vs onnx

| | bert | onnx |
|---|---|---|
| Mentions | 50 | 38 |
| Stars | 37,077 | 16,894 |
| Growth | 0.7% | 1.2% |
| Activity | 0.0 | 9.5 |
| Last commit | 6 days ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bert
-
Zero Shot Text Classification Under the hood
In 2019, a new language representation model called BERT (Bidirectional Encoder Representations from Transformers) was introduced. The main idea behind this paradigm is to first pre-train a language model using a massive amount of unlabeled data, then fine-tune all the parameters using labeled data from the downstream tasks. This allows the model to generalize well to different NLP tasks. Moreover, it has been shown that this language representation model can be used to solve downstream tasks it was never explicitly trained on, e.g., classifying a text without any training phase.
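For a concrete sense of what that looks like in practice, here is a minimal sketch using the Hugging Face transformers zero-shot pipeline with an NLI model; the model choice, input text, and candidate labels are illustrative assumptions, not details from the post:

```python
# Minimal sketch: zero-shot text classification with an NLI model via the
# Hugging Face `transformers` pipeline. Model, text, and candidate labels
# are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The new GPU doubles inference throughput at the same power budget.",
    candidate_labels=["hardware", "politics", "cooking"],
)
print(result["labels"][0], result["scores"][0])  # best label and its score
```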
-
OpenAI – Application for US trademark "GPT" has failed
The fine-tuning approach, such as the Generative Pre-trained Transformer (OpenAI GPT), introduces minimal task-specific parameters, and is trained on the downstream tasks by simply fine-tuning all pre-trained parameters.
[0] https://arxiv.org/abs/1810.04805
-
Integrate LLM Frameworks
The release of BERT in 2018 kicked off the language model revolution. The Transformers architecture succeeded RNNs and LSTMs to become the architecture of choice. Unbelievable progress was made in a number of areas: summarization, translation, text classification, entity classification and more. 2023 took things to another level with the rise of large language models (LLMs). Models with billions of parameters showed an amazing ability to generate coherent dialogue.
-
Embeddings: What they are and why they matter
The general idea is that you have a particular task & dataset, and you optimize these vectors to maximize performance on that task. So the properties of these vectors - what information is retained and what is left out during the 'compression' - are effectively determined by that task.
In general, the core task for the various "LLM tools" involves prediction of a hidden word, trained on very large quantities of real text - thus also mirroring whatever structure (linguistic, syntactic, semantic, factual, social bias, etc) exists there.
If you want to see how the sausage is made and look at the actual algorithms, then the two key approaches to read up on would probably be Mikolov's word2vec (https://arxiv.org/abs/1301.3781), with the CBOW (Continuous Bag of Words) and Continuous Skip-gram models, which are based on relatively simple mathematical optimization, and then BERT (https://arxiv.org/abs/1810.04805), which does a conceptually similar thing but with a large neural network that can learn more from the same data. For both of them, you can either read the original papers or look up blog posts or videos that explain them; different people have different preferences on how readable academic papers are.
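If you want to experiment with the first approach directly, here is a minimal word2vec training sketch with gensim; the toy corpus and hyperparameters are illustrative assumptions:

```python
# Minimal sketch: learning word vectors with gensim's word2vec.
# The toy corpus and hyperparameters are illustrative assumptions.
from gensim.models import Word2Vec

corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "chased", "the", "cat"],
    ["a", "cat", "and", "a", "dog", "played"],
]
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)  # sg=1: skip-gram
print(model.wv["cat"][:5])                   # first dimensions of the learned vector
print(model.wv.most_similar("cat", topn=2))  # nearest neighbours in vector space
```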
- Ernie, China's ChatGPT, Cracks Under Pressure
-
Ask HN: How to Break into AI Engineering
Could you post a link to "the BERT paper"? I've read some, but would be interested in reading anything that anyone considered definitive :) Is it this one? "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding": https://arxiv.org/abs/1810.04805
-
How to leverage the state-of-the-art NLP models in Rust
The Rust crate rust_bert implements the BERT language model (Devlin, Chang, Lee, Toutanova, 2018; https://arxiv.org/abs/1810.04805). The base model is implemented in the bert_model::BertModel struct. Several language model heads have also been implemented, including BertForMaskedLM, BertForSequenceClassification, BertForMultipleChoice, BertForTokenClassification, and BertForQuestionAnswering.
-
Notes on training BERT from scratch on an 8GB consumer GPU
The achievement of training a BERT model to 90% of the GLUE score on a single GPU in ~100 hours is indeed impressive. As for the original BERT pretraining run, the paper [1] mentions that the pretraining took 4 days on 16 TPU chips for the BERT-Base model and 4 days on 64 TPU chips for the BERT-Large model.
Regarding the translation of these techniques to the pretraining phase for a GPT model, it is possible that some of the optimizations and techniques used for BERT could be applied to GPT as well. However, the specific architecture and training objectives of GPT might require different approaches or additional optimizations.
As for the SOPHIA optimizer, it is a lightweight second-order method designed to speed up training: it preconditions updates with a cheap running estimate of the diagonal Hessian and clips them element-wise. According to the paper [2], SOPHIA shows promising speed-ups in language-model pretraining. It is possible that SOPHIA could help improve the training of BERT and GPT models, but further research and experimentation would be needed to confirm its effectiveness in these specific cases.
[1] https://arxiv.org/abs/1810.04805
[2] https://arxiv.org/abs/2305.14342
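For context, a rough sketch of what a BERT-style masked-language-model pretraining loop looks like with the Hugging Face Trainer; the dataset, sequence length, and step count are illustrative assumptions, not the article's actual recipe:

```python
# Minimal sketch: BERT-style masked-language-model pretraining with the
# Hugging Face Trainer. Dataset, sequence length, and step count are
# illustrative assumptions, not the article's recipe.
from datasets import load_dataset
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM(BertConfig())  # BERT-Base sized, randomly initialized

dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-scratch",
                         per_device_train_batch_size=32, max_steps=100_000)
Trainer(model=model, args=args, train_dataset=dataset,
        data_collator=collator).train()
```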
-
List of AI-Models
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
onnx
- Onyx, a new programming language powered by WebAssembly
-
From Lab to Live: Implementing Open-Source AI Models for Real-Time Unsupervised Anomaly Detection in Images
Once your model has been trained and validated using Anomalib, the next step is to prepare it for real-time implementation. This is where ONNX (Open Neural Network Exchange) or OpenVINO (Open Visual Inference and Neural Network Optimization) comes into play.
-
Object detection with ONNX, Pipeless and a YOLO model
ONNX is an open format from the Linux Foundation for representing machine learning models. It is becoming widely adopted by the machine learning community and is compatible with most machine learning frameworks, such as PyTorch and TensorFlow. Converting a model between any of those frameworks and ONNX is straightforward and can usually be done with a single command.
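As an illustration of that single-command conversion, a minimal PyTorch-to-ONNX export sketch; the model and the fixed input shape are illustrative assumptions:

```python
# Minimal sketch: exporting a PyTorch model to ONNX in one call.
# The model and fixed 1x3x224x224 input shape are illustrative assumptions.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example input traces the graph
torch.onnx.export(model, dummy_input, "resnet18.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=17)
```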
-
38TB of data accidentally exposed by Microsoft AI researchers
ONNX[0], models-as-protobufs, continuing to gain adoption will hopefully solve this issue.
[0] https://github.com/onnx/onnx
-
Reddit’s LLM text model for Ads Safety
Running inference for large models on CPU is not a new problem, and fortunately there has been great progress across many optimization frameworks for speeding up matrix and tensor computations on CPU. We explored multiple optimization frameworks and methods to improve latency, namely TorchScript, BetterTransformer and ONNX.
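For reference, a minimal sketch of the ONNX route for CPU inference with onnxruntime; the model file, vocabulary size, and input shape are hypothetical placeholders, not Reddit's actual setup:

```python
# Minimal sketch: CPU inference with ONNX Runtime. The model path,
# vocabulary size, and input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("text_classifier.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
token_ids = np.random.randint(0, 30522, size=(1, 128), dtype=np.int64)  # fake tokens
logits = session.run(None, {input_name: token_ids})[0]
print(logits.shape)
```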
-
Operationalize TensorFlow Models With ML.NET
ONNX is a format for representing machine learning models in a portable way. Additionally, ONNX models can be easily optimized and thus become smaller and faster.
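As one example of such an optimization, a hedged sketch of post-training dynamic quantization with onnxruntime's Python tooling; the file names are placeholders:

```python
# Minimal sketch: post-training dynamic INT8 quantization of an ONNX model,
# trading a little accuracy for a smaller, faster model. File names are
# illustrative placeholders.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic("model.onnx", "model.int8.onnx", weight_type=QuantType.QInt8)
```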
-
Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
I would say onnx.ai [0] provides more information about ONNX for those who aren’t working with ML/DL.
[0] https://onnx.ai
-
Does ONNX Runtime not support Double/float64?
It's not clear why you think this sub is appropriate for some third-party system with a Python interface. Why don't you try their discussion group: https://github.com/onnx/onnx/discussions
-
Async behaviour in python web frameworks
This kind of indirection through standardisation is pretty common to make compatibility between different kinds of software components easier. Some other good examples are the LSP project from Microsoft and ONNX for representing machine learning models. The former provides a standard so that IDEs don't have to reinvent the wheel for every programming language. The latter decouples training frameworks from inference frameworks. Going back to WSGI, you can find a pretty extensive rationale for the WSGI standard here if interested.
- Pickle safety in Python
What are some alternatives?
NLTK - NLTK Source
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
bert-sklearn - a sklearn wrapper for Google's BERT model
stable-diffusion-webui - Stable Diffusion web UI
pysimilar - A python library for computing the similarity between two strings (text) based on cosine similarity
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
PURE - [NAACL 2021] A Frustratingly Easy Approach for Entity and Relation Extraction https://arxiv.org/abs/2010.12812
stable-diffusion - A latent text-to-image diffusion model
NL_Parser_using_Spacy - NLP parser using NER and TDD
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]