attention-is-all-you-need-pytorch

A PyTorch implementation of the Transformer model in "Attention is All You Need". (by jadore801120)

Attention-is-all-you-need-pytorch Alternatives

Similar projects and alternatives to attention-is-all-you-need-pytorch

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a better attention-is-all-you-need-pytorch alternative or higher similarity.


attention-is-all-you-need-pytorch reviews and mentions

Posts with mentions or reviews of attention-is-all-you-need-pytorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-10.
  • ElevenLabs Launches Voice Translation Tool to Break Down Language Barriers
    2 projects | news.ycombinator.com | 10 Oct 2023
    The transformer model was invented to attend to context over the entire sequence length. Look at how the original authors used the Transformer for NMT in the original Vaswani et al. publication: https://github.com/jadore801120/attention-is-all-you-need-pytorch (a minimal sketch of that attention computation appears after this list).
  • Question: LLMs
    1 project | /r/learnmachinelearning | 6 Jul 2023
    I did implement an "LLM" proof of concept from scratch in a course for my master's, pretty much a small implementation of a transformer from the Attention Is All You Need paper (plus other resources). It was useless, but it was a great experience for understanding how it works. There are a few implementations like this out there, like this one: https://github.com/jadore801120/attention-is-all-you-need-pytorch (first Google result). I think it is a fun exercise (the amount of fun depends on how much of a masochist you are :) ).
  • Lack of activation in transformer feedforward layer?
    2 projects | /r/learnmachinelearning | 20 May 2021
    I'm curious why the second matrix multiplication is not followed by an activation, unlike the first one. Is there any particular reason why a non-linearity would be trivial, or even avoided, in the second operation? For reference, variations of this can be seen in a number of different implementations, including BERT-pytorch and attention-is-all-you-need-pytorch (see the feed-forward sketch after this list).
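
To make the first point above concrete, here is a minimal sketch of scaled dot-product attention in PyTorch. The function name and tensor shapes are illustrative assumptions, not the repo's actual code; the formula Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V is the one from the Vaswani et al. paper, and the score matrix is what lets every position attend over the entire sequence:

    import torch
    import torch.nn.functional as F

    def scaled_dot_product_attention(q, k, v, mask=None):
        # q, k, v: (batch, n_heads, seq_len, d_k); shapes are illustrative.
        d_k = q.size(-1)
        # Every query position is scored against every key position,
        # so attention spans the whole sequence in a single step.
        scores = torch.matmul(q, k.transpose(-2, -1)) / d_k ** 0.5
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float('-inf'))
        weights = F.softmax(scores, dim=-1)
        return torch.matmul(weights, v)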
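
On the feed-forward question above: in the paper, the position-wise network is FFN(x) = max(0, xW1 + b1)W2 + b2, so the ReLU sits only between the two projections. A minimal sketch under that definition (the class is illustrative, not the repo's code; d_model=512 and d_ff=2048 are the paper's base configuration):

    import torch
    import torch.nn as nn

    class PositionwiseFeedForward(nn.Module):
        # FFN(x) = max(0, x W1 + b1) W2 + b2: no activation after W2.
        def __init__(self, d_model=512, d_ff=2048, dropout=0.1):
            super().__init__()
            self.w_1 = nn.Linear(d_model, d_ff)  # expand to inner dimension
            self.w_2 = nn.Linear(d_ff, d_model)  # project back, left linear
            self.dropout = nn.Dropout(dropout)

        def forward(self, x):
            # The only non-linearity is the ReLU between the projections;
            # the output feeds the residual add and layer norm that follow.
            return self.w_2(self.dropout(torch.relu(self.w_1(x))))

One common rationale for leaving the second projection linear is that its output is immediately added back to the residual stream and normalized; a second ReLU would constrain that signal to be non-negative without adding expressive power.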

Stats

Basic attention-is-all-you-need-pytorch repo stats
Mentions: 3
Stars: 8,620
Activity: 0.0
Last commit: 3 months ago

