KG_RAG
Empower Large Language Models (LLMs) using Knowledge Graph based Retrieval-Augmented Generation (KG-RAG) for knowledge-intensive tasks (by BaranziniLab)
LLMs-from-scratch
Implementing a ChatGPT-like LLM from scratch, step by step (by rasbt)
| | KG_RAG | LLMs-from-scratch |
|---|---|---|
| Mentions | 5 | 9 |
| Stars | 357 | 16,129 |
| Growth | 36.1% | - |
| Activity | 9.7 | 9.6 |
| Latest commit | 27 days ago | 3 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
KG_RAG
Posts with mentions or reviews of KG_RAG.
We have used some of these posts to build our list of alternatives
and similar projects.
- A list of system prompts used for biomedical RAG (KG-RAG) using LLM
- Enable GPT with biomedical knowledge and efficient token usage using KG-RAG
- Supercharge the LLM with a Knowledge Graph Using KG-RAG
- Infusing Domain Knowledge to Large Language Models Using KG-RAG
- Empowering GPT and Llama models with Biomedical knowledge using KG-RAG framework
LLMs-from-scratch
Posts with mentions or reviews of LLMs-from-scratch.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-03-23.
- Finetune a GPT Model for Spam Detection on Your Laptop in Just 5 Minutes
- Insights from Finetuning LLMs for Classification Tasks
- Ask HN: Textbook Regarding LLMs (https://www.manning.com/books/build-a-large-language-model-f...)
- Comparing 5 ways to implement Multihead Attention in PyTorch
- FLaNK Stack 29 Jan 2024
- Implementing a ChatGPT-like LLM from scratch, step by step
The attention mechanism we implement in this book* is specific to LLMs in terms of the text inputs, but it's fundamentally the same attention mechanism that is used in vision transformers. The only difference is that in LLMs, you turn text into tokens and convert those tokens into vector embeddings that go into the model. In vision transformers, you use an image patch as a token instead and turn those patches into vector embeddings (a bit hard to explain without visuals here). In both the text and vision contexts, it's the same attention mechanism, and in both cases it receives vector embeddings.
(*Chapter 3, already submitted last week and should be online in the MEAP soon, in the meantime the code along with the notes is also available here: https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01...)
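The point about the mechanism being modality-agnostic can be sketched in a few lines. Below is a minimal, self-contained example (not taken from the book) of single-head scaled dot-product self-attention with NumPy: the function only sees a matrix of embedding vectors, so the same code applies whether the rows came from text tokens or image patches. All names, shapes, and the toy random inputs are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(x, W_q, W_k, W_v):
    """Single-head self-attention over a sequence of embeddings.

    x: (seq_len, d_in) embedding matrix -- rows can come from text
    tokens or image patches; the mechanism doesn't care which.
    """
    q = x @ W_q  # queries (seq_len, d_out)
    k = x @ W_k  # keys    (seq_len, d_out)
    v = x @ W_v  # values  (seq_len, d_out)
    # similarity of every position to every other, scaled by sqrt(d_out)
    scores = q @ k.T / np.sqrt(k.shape[-1])  # (seq_len, seq_len)
    # row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # (seq_len, d_out) context vectors

rng = np.random.default_rng(0)
seq_len, d_in, d_out = 6, 8, 4  # 6 "tokens" (or patches), toy sizes
x = rng.normal(size=(seq_len, d_in))
W_q, W_k, W_v = (rng.normal(size=(d_in, d_out)) for _ in range(3))
out = scaled_dot_product_attention(x, W_q, W_k, W_v)
print(out.shape)  # (6, 4)
```

Swapping the text embeddings in `x` for patch embeddings (as a vision transformer would produce) changes nothing in the function body, which is the comment's point.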
What are some alternatives?
When comparing KG_RAG and LLMs-from-scratch you can also consider the following projects:
LangChain-SynData-RAG-Eval - LangChain, Llama2-Chat, and zero- and few-shot prompting are used to generate synthetic datasets for IR and RAG system evaluation
s4 - Structured state space sequence models