LLMs-from-scratch vs machine-learning-book

| | LLMs-from-scratch | machine-learning-book |
|---|---|---|
| Mentions | 11 | 2 |
| Stars | 18,902 | 3,014 |
| Growth | - | - |
| Activity | 9.6 | 7.1 |
| Latest commit | 7 days ago | 23 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | MIT License |
Stars: the number of stars that a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
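The exact formula behind the activity number isn't published; as a rough illustration of the idea described above (recent commits weighted more heavily, then ranked against all other tracked projects), here is a hypothetical Python sketch. The half-life decay and percentile scaling are assumptions, not the tracker's actual method:

```python
from datetime import datetime, timezone

# Hypothetical recency-weighted activity score; the real formula is
# not published, so the decay constant below is an illustrative guess.
def raw_activity(commit_dates, half_life_days=30):
    now = datetime.now(timezone.utc)
    # Each commit contributes a weight that halves every
    # `half_life_days`, so recent commits dominate the score.
    return sum(
        0.5 ** ((now - d).days / half_life_days) for d in commit_dates
    )

def activity_0_to_10(score, all_project_scores):
    # Scale by percentile rank across all tracked projects, so a
    # value of 9.0 means the project beats 90% of tracked projects.
    rank = sum(s <= score for s in all_project_scores)
    return round(10 * rank / len(all_project_scores), 1)
```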
Posts mentioning LLMs-from-scratch
- Evaluating LLMs locally, on a laptop, with Llama 3 and Ollama
- Ask HN: What are some books/resources where we can learn by building
By happenstance, today I learned that Manning recently started publishing an X From Scratch series, which currently includes:
* Container Orchestrator: https://www.manning.com/books/build-an-orchestrator-in-go-fr...
* LLM: https://www.manning.com/books/build-a-large-language-model-f...
* Frontend Framework: https://www.manning.com/books/build-a-frontend-web-framework...
- Finetuning an LLM-Based Spam Classifier with LoRA from Scratch
- Finetune a GPT Model for Spam Detection on Your Laptop in Just 5 Minutes
- Insights from Finetuning LLMs for Classification Tasks
- Ask HN: Textbook Regarding LLMs
https://www.manning.com/books/build-a-large-language-model-f...
- Comparing 5 ways to implement Multihead Attention in PyTorch
- FLaNK Stack 29 Jan 2024
- Implementing a ChatGPT-like LLM from scratch, step by step
The attention mechanism we implement in this book* is specific to LLMs in terms of the text inputs, but it's fundamentally the same attention mechanism that is used in vision transformers. The only difference is that in LLMs, you turn text into tokens and convert these tokens into vector embeddings that go into the LLM. In vision transformers, you use image patches as tokens instead, and turn those into vector embeddings (a bit hard to explain without visuals here). In both the text and vision settings, it's the same attention mechanism, and in both cases it receives vector embeddings.
(*Chapter 3, already submitted last week and should be online in the MEAP soon, in the meantime the code along with the notes is also available here: https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01...)
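A minimal PyTorch sketch of this point (not code from the book; the random embeddings, dimensions, and single-head setup are illustrative assumptions): the same scaled dot-product attention runs unchanged whether it receives token embeddings or patch embeddings.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(123)
d_model = 64  # embedding dimension (illustrative choice)

# Text: a sequence of 8 token embeddings, as a token embedding
# layer in an LLM would produce
text_tokens = torch.randn(1, 8, d_model)

# Vision: a 4x4 grid of image patches flattened into 16 patch
# embeddings, as a ViT patch embedding layer would produce
image_patches = torch.randn(1, 16, d_model)

W_q = torch.nn.Linear(d_model, d_model, bias=False)
W_k = torch.nn.Linear(d_model, d_model, bias=False)
W_v = torch.nn.Linear(d_model, d_model, bias=False)

def attention(x):
    # Identical scaled dot-product attention regardless of whether
    # x holds token embeddings or patch embeddings
    q, k, v = W_q(x), W_k(x), W_v(x)
    scores = q @ k.transpose(-2, -1) / d_model**0.5
    return F.softmax(scores, dim=-1) @ v

print(attention(text_tokens).shape)    # torch.Size([1, 8, 64])
print(attention(image_patches).shape)  # torch.Size([1, 16, 64])
```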
Posts mentioning machine-learning-book
- Implementing a ChatGPT-like LLM from scratch, step by step
Sorry, in that case I would rather recommend a dedicated RL book. The RL part in LLMs will be very specific to LLMs, and I will only cover what's absolutely relevant in terms of background info. I do have a longish intro chapter on RL in my other general ML/DL book (https://github.com/rasbt/machine-learning-book/tree/main/ch1...), but as others said, I would recommend a dedicated RL book in your case.
- "Machine Learning with PyTorch and Scikit-Learn" book
All the code examples are available here: https://github.com/rasbt/machine-learning-book
What are some alternatives?
s4 - Structured state space sequence models
skorch - A scikit-learn compatible neural network library that wraps PyTorch
python-machine-learning-book-3rd-edition - The "Python Machine Learning (3rd edition)" book code repository
ML-Workspace - 🛠 All-in-one web-based IDE specialized for machine learning and data science.
embedding-encoder - Scikit-Learn compatible transformer that turns categorical variables into dense entity embeddings.
gdrl - Grokking Deep Reinforcement Learning
hyperlearn - 2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old.
nn - 🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠