The GPT Architecture, on a Napkin

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com.

  • minGPT

    A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training

  • Don't know. Karpathy has a very compact implementation of GPT [0] using standard technology (it could be even more compact, but it reimplements, for example, the attention layer for teaching purposes), and while he presumably has no access to how the real model was trained exactly, if there were more to it I think he would know and point it out. (A minimal sketch of such an attention block follows below.)

    [0] https://github.com/karpathy/minGPT/tree/master/mingpt
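For flavor, here is roughly what such a compact attention layer looks like: a minimal multi-head causal self-attention module in the spirit of minGPT. This is an illustrative sketch, not code from the repo; the class name and hyperparameters are assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttention(nn.Module):
    """Minimal multi-head causal self-attention (illustrative, not minGPT's actual code)."""
    def __init__(self, n_embd: int, n_head: int, block_size: int):
        super().__init__()
        assert n_embd % n_head == 0
        self.n_head = n_head
        self.qkv = nn.Linear(n_embd, 3 * n_embd)   # fused query/key/value projection
        self.proj = nn.Linear(n_embd, n_embd)      # output projection
        # causal mask: token t may only attend to tokens <= t
        mask = torch.tril(torch.ones(block_size, block_size))
        self.register_buffer("mask", mask.view(1, 1, block_size, block_size))

    def forward(self, x):
        B, T, C = x.shape
        q, k, v = self.qkv(x).split(C, dim=2)
        # reshape each to (B, n_head, T, head_dim)
        q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        att = (q @ k.transpose(-2, -1)) / math.sqrt(k.size(-1))  # scaled dot-product
        att = att.masked_fill(self.mask[:, :, :T, :T] == 0, float("-inf"))
        att = F.softmax(att, dim=-1)
        y = att @ v                                 # (B, n_head, T, head_dim)
        y = y.transpose(1, 2).contiguous().view(B, T, C)
        return self.proj(y)
```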

  • metaseq

    Repo for external large-scale work

  • I work in this field (PhD candidate), and what you say is true for smaller models, but not GPT-3-scale models. Training large-scale models involves a lot more, as the OP said; it's not just learning rate schedulers, it's a whole bunch of stuff. (A sketch of one such piece follows after the link below.)

    See this logbook from training the GPT-3 sized OPT model - https://github.com/facebookresearch/metaseq/blob/main/projec...
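As a taste of one piece of that "whole bunch of stuff", here is a minimal sketch of a warmup-plus-cosine learning rate schedule combined with gradient clipping, two ingredients commonly used in large-scale runs. The function name and hyperparameters are illustrative assumptions, not the OPT recipe.

```python
import math
import torch

def lr_at_step(step, max_lr=6e-4, warmup=2000, total=100_000, min_lr=6e-5):
    """Linear warmup followed by cosine decay (a common large-model schedule)."""
    if step < warmup:
        return max_lr * (step + 1) / warmup
    progress = (step - warmup) / max(1, total - warmup)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

model = torch.nn.Linear(16, 16)          # stand-in for a real transformer
opt = torch.optim.AdamW(model.parameters(), lr=6e-4)

for step in range(10):                   # toy loop; real runs go for ~100k+ steps
    for group in opt.param_groups:
        group["lr"] = lr_at_step(step)   # set the scheduled learning rate
    loss = model(torch.randn(4, 16)).pow(2).mean()
    loss.backward()
    # clip gradient norm to guard against loss spikes
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    opt.step()
    opt.zero_grad()
```

Even this leaves out much of what the logbook records, such as restarting from checkpoints after divergences and hardware failures.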

  • x-transformers

    A simple but complete full-attention transformer with a set of promising experimental features from various papers

  • It is all documented here, in writing and in code: https://github.com/lucidrains/x-transformers

    You will want to use rotary embeddings if you do not need length extrapolation (see the sketch below).
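For reference, switching on rotary embeddings in x-transformers looks roughly like this (adapted from the repository's README; the model sizes are illustrative):

```python
import torch
from x_transformers import TransformerWrapper, Decoder

model = TransformerWrapper(
    num_tokens=20000,
    max_seq_len=1024,
    attn_layers=Decoder(
        dim=512,
        depth=6,
        heads=8,
        rotary_pos_emb=True,  # rotary positional embeddings, as recommended above
    ),
)

tokens = torch.randint(0, 20000, (1, 1024))  # dummy token ids
logits = model(tokens)                        # shape: (1, 1024, 20000)
```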
