attention-is-all-you-need-pytorch vs BERT-pytorch
| | attention-is-all-you-need-pytorch | BERT-pytorch |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 8,432 | 5,988 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | 8 days ago | 7 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
attention-is-all-you-need-pytorch
ElevenLabs Launches Voice Translation Tool to Break Down Language Barriers
The transformer model was invented to attend to context over the entire sequence length. Look at how the authors used the Transformer for NMT in the original Vaswani et al. publication. https://github.com/jadore801120/attention-is-all-you-need-py...
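For context, the operation that paper builds on is scaled dot-product attention, softmax(QK^T / √d_k)V, which scores every position against every other position in the sequence. A minimal PyTorch sketch of that formula (function name and shapes are illustrative, not taken from the linked repo):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per Vaswani et al.
    d_k = q.size(-1)
    # Score every query position against every key position in the sequence.
    scores = torch.matmul(q, k.transpose(-2, -1)) / d_k ** 0.5
    if mask is not None:
        # Masked positions get -inf so softmax assigns them zero weight.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return torch.matmul(weights, v)
```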
Question: LLMs
I did implement an "LLM" proof of concept from scratch in a course for my master's, pretty much a small implementation of a transformer from the "Attention Is All You Need" paper (plus other resources). It was useless, but it was a great experience for understanding how it all works. There are a few implementations like this out there, such as this one: https://github.com/jadore801120/attention-is-all-you-need-pytorch (first Google result). I think it is a fun exercise (the amount of fun depends on how much of a masochist you are :) ).
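If you want to try the same exercise, recent PyTorch ships the stock building blocks, which make a convenient reference for sanity-checking a from-scratch version; a minimal sketch (the hyperparameters are arbitrary toy values, not taken from any of the linked repos):

```python
import torch
import torch.nn as nn

# Toy encoder built from PyTorch's stock layers -- useful as a reference
# when validating a from-scratch transformer implementation.
d_model, n_heads, n_layers, vocab = 128, 4, 2, 1000  # arbitrary toy sizes
embed = nn.Embedding(vocab, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=512,
                               batch_first=True),
    num_layers=n_layers,
)

tokens = torch.randint(0, vocab, (1, 16))  # (batch, seq_len)
out = encoder(embed(tokens))               # (1, 16, d_model)
print(out.shape)
```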
Lack of activation in transformer feedforward layer?
I'm curious as to why the second matrix multiplication is not followed by an activation, unlike the first one. Is there any particular reason why a non-linearity would be unnecessary, or even avoided, in the second operation? For reference, variations of this can be seen in a number of different implementations, including BERT-pytorch and attention-is-all-you-need-pytorch.
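For concreteness, the block in question is the paper's position-wise feed-forward network, FFN(x) = max(0, xW1 + b1)W2 + b2: exactly one ReLU, between the two linear maps. A minimal PyTorch sketch of that pattern (class and attribute names are illustrative, not copied from either repo):

```python
import torch.nn as nn

class PositionwiseFeedForward(nn.Module):
    # FFN(x) = max(0, x W1 + b1) W2 + b2 -- a single nonlinearity, as in the paper.
    def __init__(self, d_model, d_ff, dropout=0.1):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_ff)  # expand to the inner dimension
        self.w2 = nn.Linear(d_ff, d_model)  # project back; no activation after this
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        return self.w2(self.dropout(self.relu(self.w1(x))))
```

One common explanation is that the second projection only maps back into the residual stream, where its output is added to the block's input and layer-normalized; keeping it linear lets the block write arbitrary (including negative) updates to that stream.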
BERT-pytorch
Lack of activation in transformer feedforward layer?
I'm curious as to why the second matrix multiplication is not followed by an activation, unlike the first one. Is there any particular reason why a non-linearity would be unnecessary, or even avoided, in the second operation? For reference, variations of this can be seen in a number of different implementations, including BERT-pytorch and attention-is-all-you-need-pytorch.
What are some alternatives?
LFattNet - Attention-based View Selection Networks for Light-field Disparity Estimation
haystack - 🔍 LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
long-range-arena - Long Range Arena for Benchmarking Efficient Transformers
bertviz - BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
OpenPrompt - An Open-Source Framework for Prompt-Learning.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
allennlp - An open-source NLP research library, built on PyTorch.
scibert - A BERT model for scientific text.
cuad - CUAD (NeurIPS 2021)