LLMs-from-scratch vs FLiPStackWeekly

| | LLMs-from-scratch | FLiPStackWeekly |
|---|---|---|
| Mentions | 11 | 86 |
| Stars | 19,418 | 14 |
| Growth | - | - |
| Activity | 9.6 | 9.9 |
| Last commit | about 17 hours ago | 6 days ago |
| Language | Jupyter Notebook | - |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLMs-from-scratch
- Evaluating LLMs locally, on a laptop, with Llama 3 and Ollama
- Ask HN: What are some books/resources where we can learn by building
By happenstance, today I learned that Manning recently started publishing an X From Scratch series, which currently includes:
* Container Orchestrator: https://www.manning.com/books/build-an-orchestrator-in-go-fr...
* LLM : https://www.manning.com/books/build-a-large-language-model-f...
* Frontend Framework: https://www.manning.com/books/build-a-frontend-web-framework...
- Finetuning an LLM-Based Spam Classifier with LoRA from Scratch
- Finetune a GPT Model for Spam Detection on Your Laptop in Just 5 Minutes
- Insights from Finetuning LLMs for Classification Tasks
- Ask HN: Textbook Regarding LLMs
  https://www.manning.com/books/build-a-large-language-model-f...
- Comparing 5 ways to implement Multihead Attention in PyTorch
- FLaNK Stack 29 Jan 2024
- Implementing a ChatGPT-like LLM from scratch, step by step
The attention mechanism we implement in this book* is specific to LLMs in terms of the text inputs, but it's fundamentally the same attention mechanism that is used in vision transformers. The only difference is that in LLMs you turn text into tokens and convert those tokens into vector embeddings that go into the LLM, whereas in vision transformers you treat each image patch as a token and turn those patches into vector embeddings (a bit hard to explain without visuals here). In both the text and vision contexts it's the same attention mechanism, and in both cases it receives vector embeddings.
(*Chapter 3, already submitted last week and should be online in the MEAP soon, in the meantime the code along with the notes is also available here: https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01...)
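The modality-agnostic point above can be sketched with a minimal scaled dot-product self-attention over arbitrary embedding vectors. This is an illustrative NumPy sketch, not the book's PyTorch implementation; the function names `softmax` and `attention` are my own:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scaled dot-product attention: it only sees embedding vectors,
    # whether they came from text tokens or from image patches
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq, seq) pairwise similarities
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of value vectors

# A toy "sequence" of 4 embeddings of dimension 8 -- these could equally
# be token embeddings (LLM) or patch embeddings (vision transformer).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = attention(x, x, x)  # self-attention: Q, K, V from the same input
print(out.shape)          # same shape as the input embeddings
```

The attention function never inspects what the embeddings represent, which is exactly why the same mechanism carries over from text to vision unchanged.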
FLiPStackWeekly
What are some alternatives?
s4 - Structured state space sequence models
gorilla-cli - LLMs for your CLI
awk-raycaster - Pseudo-3D shooter written completely in gawk using raycasting technique
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
modelscope - ModelScope: bring the notion of Model-as-a-Service to life.
pulsar-thermal-pinot - Apache Pulsar - Apache Pinot - Thermal Sensor Data
FLiP-PulsarSummit2022Asia - Pulsar Summit Asia 2022
sherlock - Hunt down social media accounts by username across social networks
create-nifi-pulsar-flink-apps - How to create a real-time scalable streaming app using Apache NiFi, Apache Pulsar and Apache Flink SQL
OpenVoice - Instant voice cloning by MyShell.
CML_AMP_LLM_Chatbot_Augmented_with_Enterprise_Data
VToonify - [SIGGRAPH Asia 2022] VToonify: Controllable High-Resolution Portrait Video Style Transfer