| | AD-Rosetta-Stone | picoGPT |
|---|---|---|
| Mentions | 2 | 7 |
| Stars | 26 | 3,081 |
| Growth | - | - |
| Activity | 10.0 | 1.9 |
| Last commit | almost 6 years ago | about 1 year ago |
| Language | Scala | Python |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
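As a rough illustration of how such a recency-weighted score could be computed (the site does not publish its exact formula, so the exponential decay and 30-day half-life below are assumptions, not the real metric), here is a small Python sketch:

```python
# Illustrative sketch only: one common way to weight recent commits more
# heavily is an exponential decay over commit age. The half-life value is
# an assumption, not the formula the comparison site actually uses.
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Sum of per-commit weights that halve every `half_life_days`."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score

# Example: three commits of increasing age contribute less and less.
dates = [datetime(2024, 5, 1, tzinfo=timezone.utc),
         datetime(2024, 3, 1, tzinfo=timezone.utc),
         datetime(2023, 5, 1, tzinfo=timezone.utc)]
print(round(activity_score(dates), 3))
```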
AD-Rosetta-Stone
-
Understanding Automatic Differentiation in 30 lines of Python
[1] https://github.com/qobi/AD-Rosetta-Stone/
-
Autodidax: Jax Core from Scratch (In Python)
I find the solutions from https://github.com/qobi/AD-Rosetta-Stone/ to be very helpful, particularly for representing forward and backward mode automatic differentiation using a functional approach.
I used this code as inspiration for a functional-only implementation (without references/pointers) in Mercury: https://github.com/mclements/mercury-ad
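As a rough illustration of the forward-mode half of that idea (this is not code from AD-Rosetta-Stone, the linked article, or mercury-ad; the `Dual` class and `deriv` helper are illustrative names), a purely functional dual-number sketch in Python might look like this:

```python
# Minimal sketch of forward-mode AD with dual numbers (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual:
    val: float   # primal value
    tan: float   # tangent (derivative carried alongside the value)

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(float(other), 0.0)
        return Dual(self.val + other.val, self.tan + other.tan)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(float(other), 0.0)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.tan * other.val + self.val * other.tan)

def deriv(f, x):
    """Derivative of a single-argument function f at x via forward mode."""
    return f(Dual(x, 1.0)).tan

# Example: d/dx (x*x + 3x) at x = 2 is 2*2 + 3 = 7
print(deriv(lambda x: x * x + x * 3.0, 2.0))  # 7.0
```

Reverse mode needs more machinery (a tape or a continuation-passing transform), which is exactly the part the Rosetta-Stone examples spell out in several languages.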
picoGPT
-
Understanding Automatic Differentiation in 30 lines of Python
In that case, you might also enjoy https://jaykmody.com/blog/gpt-from-scratch/
(here's the raw code: https://github.com/jaymody/picoGPT/blob/main/gpt2.py)
-
Transformers from Scratch
I wrote a minimal implementation in NumPy here (the forward pass code is only 40 lines): https://github.com/jaymody/picoGPT
Although this is for a decoder-only transformer (aka GPT) and doesn't include the encoder part.
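To give a feel for how compact such a forward pass can be (this is not the actual picoGPT gpt2.py code; it uses random weights, a single head, a single block, and ReLU instead of GELU, so treat every name and shape as an assumption), here is a hedged NumPy sketch of a decoder-only block:

```python
# Sketch of a decoder-only (GPT-style) forward pass in NumPy.
# Illustrative only: simplified relative to picoGPT's gpt2.py.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def causal_self_attention(x, w_qkv, w_out):
    # x: [seq, d_model]; project to queries, keys, values
    q, k, v = np.split(x @ w_qkv, 3, axis=-1)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # causal mask: position i may only attend to positions <= i
    mask = np.triu(np.full(scores.shape, -1e10), k=1)
    att = softmax(scores + mask)
    return (att @ v) @ w_out

def transformer_block(x, p):
    x = x + causal_self_attention(layer_norm(x), p["w_qkv"], p["w_out"])
    h = layer_norm(x) @ p["w_mlp1"]
    x = x + np.maximum(h, 0) @ p["w_mlp2"]   # ReLU MLP (GPT-2 uses GELU)
    return x

# Tiny random model: vocab 50, d_model 16, one block, greedy decoding.
rng = np.random.default_rng(0)
vocab, d_model, seq = 50, 16, 5
p = {
    "emb":    rng.normal(size=(vocab, d_model)) * 0.1,
    "pos":    rng.normal(size=(seq, d_model)) * 0.1,
    "w_qkv":  rng.normal(size=(d_model, 3 * d_model)) * 0.1,
    "w_out":  rng.normal(size=(d_model, d_model)) * 0.1,
    "w_mlp1": rng.normal(size=(d_model, 4 * d_model)) * 0.1,
    "w_mlp2": rng.normal(size=(4 * d_model, d_model)) * 0.1,
}

tokens = np.array([3, 14, 15, 9, 2])
x = p["emb"][tokens] + p["pos"]
x = transformer_block(x, p)
logits = layer_norm(x) @ p["emb"].T      # weight-tied output projection
print(int(np.argmax(logits[-1])))        # greedy pick for the next token
```

The real model adds multi-head attention, learned layer-norm parameters, many stacked blocks, and the pretrained GPT-2 weights, but the control flow is essentially the same.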
- FLaNK Stack Weekly 3 April 2023
-
GPT-4 Says an Open-Source Chatbot Vicuna Reaches 90% ChatGPT Quality
Take a look at https://github.com/jaymody/picoGPT/blob/a750c145ba4d09d57648...
Yes, this is GPT-2, not GPT-4, and it's not the chat system, only the model; it's basically only the inference part, not the training loop, and it's somewhat simplified.
Still, take a good look.
That's essentially what it is, and it fits on a single sheet of paper.
There is nothing specifically about language in "language model"; we just call it that. Better to just call it an LLM.
Nobody knows exactly what it learns, although there would be ways to poke around given some research programs. But interest in that seems limited at the moment; everyone is busy improving it or building applications.
Perhaps the answer is that we overestimated what a mind is. It's like how we used to ask what life is, and it turned out there is nothing special about life; not even the DNA is controlling anything. It's merely a chemical process, even if a complex one.
-
u/functor7 explains why AIs like ChatGPT do not "understand" their subject
(The hardest part was designing a math function capable of getting good at this game, but when all is said and done, it need not be a whole lot of code.)
- PicoGPT: An unnecessarily tiny implementation of GPT-2 in NumPy
- picoGPT: An unnecessarily tiny implementation of GPT-2 in NumPy
What are some alternatives?
mercury-ad - Mercury library for automatic differentiation
gpt4all - gpt4all: run open-source LLMs anywhere
autograd - Efficiently computes derivatives of numpy code.
glances - Glances an Eye on your system. A top/htop alternative for GNU/Linux, BSD, Mac OS and Windows operating systems.
autodidact - A pedagogical implementation of Autograd
taskwarrior - Taskwarrior - Command line Task Management
Tensor-Puzzles - Solve puzzles. Improve your pytorch.
ctop - Top-like interface for container metrics
owl - Owl - OCaml Scientific Computing @ https://ocaml.xyz
exiftool - ExifTool meta information reader/writer
ddgr - :duck: DuckDuckGo from the terminal