LLMs-from-scratch vs langroid

| | LLMs-from-scratch | langroid |
|---|---|---|
| Mentions | 11 | 16 |
| Stars | 19,418 | 1,808 |
| Growth | - | 14.3% |
| Activity | 9.6 | 9.8 |
| Last commit | about 17 hours ago | 6 days ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLMs-from-scratch
- Evaluating LLMs locally, on a laptop, with Llama 3 and Ollama
- Ask HN: What are some books/resources where we can learn by building
By happenstance, today I learned that Manning recently started publishing an X From Scratch series, which currently includes:
* Container Orchestrator: https://www.manning.com/books/build-an-orchestrator-in-go-fr...
* LLM: https://www.manning.com/books/build-a-large-language-model-f...
* Frontend Framework: https://www.manning.com/books/build-a-frontend-web-framework...
- Finetuning an LLM-Based Spam Classifier with LoRA from Scratch
- Finetune a GPT Model for Spam Detection on Your Laptop in Just 5 Minutes
- Insights from Finetuning LLMs for Classification Tasks
- Ask HN: Textbook Regarding LLMs
https://www.manning.com/books/build-a-large-language-model-f...
- Comparing 5 ways to implement Multihead Attention in PyTorch
- FLaNK Stack 29 Jan 2024
- Implementing a ChatGPT-like LLM from scratch, step by step
The attention mechanism we implement in this book* is specific to LLMs in terms of the text inputs, but it's fundamentally the same attention mechanism that is used in vision transformers. The only difference is that in LLMs, you turn text into tokens and convert these tokens into vector embeddings that go into the LLM. In vision transformers, you instead treat each image patch as a token and turn those patches into vector embeddings (a bit hard to explain without visuals here). In both the text and vision contexts, it's the same attention mechanism, and in both cases it receives vector embeddings.
(*Chapter 3, already submitted last week and should be online in the MEAP soon, in the meantime the code along with the notes is also available here: https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01...)
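The modality-agnostic point above can be sketched in a few lines. This is a minimal NumPy illustration (not the book's PyTorch code): the same attention function consumes either a sequence of token embeddings or a sequence of patch embeddings, since all it ever sees is vectors.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, W_q, W_k, W_v):
    # x: (seq_len, d_in) embeddings -- text tokens or image patches alike
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(k.shape[-1])  # scaled dot-product scores
    return softmax(scores) @ v               # weighted sum of values

rng = np.random.default_rng(0)
d_in, d_out = 32, 16
W_q, W_k, W_v = (rng.normal(size=(d_in, d_out)) for _ in range(3))

text_tokens   = rng.normal(size=(8, d_in))   # 8 token embeddings
image_patches = rng.normal(size=(16, d_in))  # 16 patch embeddings
print(attention(text_tokens, W_q, W_k, W_v).shape)    # (8, 16)
print(attention(image_patches, W_q, W_k, W_v).shape)  # (16, 16)
```

Nothing in `attention` knows whether the rows of `x` came from a tokenizer or an image patchifier; only the embedding pipeline in front of it differs.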
langroid
- Show HN: Mesop, open-source Python UI framework used at Google
This is very interesting. To build LLM chat-oriented web apps in Python, these days I use Chainlit[1], which I find much better than Streamlit for this. I've integrated Chainlit into the Langroid[2] multi-agent LLM framework via a callback-injection class[3] (i.e., hooks to display responses from various entities).
One of the key requirements in a multi-agent chat app is to be able to display steps of sub-tasks nested under parent tasks (to any level of nesting), with the option to fold/collapse sub-steps to view only the parent steps. I was able to get this to work with Chainlit, though it was not easy, since their sub-step rendering mental model seemed more aligned with a certain other LLM framework with a partial name overlap with theirs.
That said, I am very curious if Mesop could be a viable alternative, for this type of nested chat implementation, especially if the overall layout can be much more flexible (which it seems like), and more production-ready.
[1] Chainlit https://github.com/Chainlit/chainlit
[2] Langroid: https://github.com/langroid/langroid
[3] Langroid ChainlitAgentCallback class: https://github.com/langroid/langroid/blob/main/langroid/agen...
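The nested fold/collapse requirement can be sketched framework-independently. This is a hypothetical plain-Python model (not Chainlit's or Langroid's actual API): each step carries its sub-steps, and a collapsed step hides its entire subtree so only the parent line is shown.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    collapsed: bool = False                 # fold sub-steps under this one
    children: list["Step"] = field(default_factory=list)

def render(step, depth=0, lines=None):
    """Depth-first rendering; a collapsed step hides its entire subtree."""
    if lines is None:
        lines = []
    lines.append("  " * depth + step.name)
    if not step.collapsed:
        for child in step.children:
            render(child, depth + 1, lines)
    return lines

task = Step("Main task", children=[
    Step("Sub-task A", collapsed=True, children=[Step("A.1"), Step("A.2")]),
    Step("Sub-task B", children=[Step("B.1")]),
])
print("\n".join(render(task)))
```

Toggling `collapsed` on any node and re-rendering gives the fold/unfold behavior; a UI framework would do the same walk but emit widgets instead of indented lines.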
- OpenAI: Streaming is now available in the Assistants API
This was indeed true in the beginning, and I don't know if this has changed. Inserting messages with the assistant role is crucial for many reasons, such as when you want to implement caching, or otherwise edit/compress a previous assistant response for cost or other reasons.
At the time I implemented a work-around in Langroid[1]: since you can only insert a “user” role message, prepend the content with ASSISTANT: whenever you want it to be treated as an assistant role. This actually works as expected and I was able to do caching. I explained it in this forum:
https://community.openai.com/t/add-custom-roles-to-messages-...
[1] the Langroid code that adds a message with a given role, using this above “assistant spoofing trick”:
https://github.com/langroid/langroid/blob/main/langroid/agen...
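The "assistant spoofing trick" can be sketched as a small helper. This is a hypothetical illustration of the idea, not the actual Langroid code behind the truncated link: any non-user role is sent as a "user" message whose content is prefixed with the intended role in caps.

```python
def spoof_role_message(role: str, content: str) -> dict:
    """Prepare a message for an API that only accepts role="user" on
    inserted messages: for any other role, send it as "user" but prefix
    the content with the intended role (e.g. "ASSISTANT: ..."), so the
    model treats it as its own prior turn."""
    if role.lower() == "user":
        return {"role": "user", "content": content}
    return {"role": "user", "content": f"{role.upper()}: {content}"}

msg = spoof_role_message("assistant", "Paris is the capital of France.")
print(msg)  # {'role': 'user', 'content': 'ASSISTANT: Paris is the capital of France.'}
```

The resulting dict is what you would then pass to the Assistants API's message-creation endpoint, which (at the time of the comment) only accepted the user role on inserted messages.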
- FLaNK Stack 29 Jan 2024
- Ollama Python and JavaScript Libraries
Same question here. Ollama is fantastic as it makes it very easy to run models locally, but if you already have a lot of code that processes OpenAI API responses (with retry, streaming, async, caching, etc.), it would be nice to simply switch the API client to Ollama without having to maintain a whole other branch of code that handles Ollama API responses. One way to do an easy switch is using the litellm library as a go-between, but it's not ideal (and I also recently found issues with their chat formatting for Mistral models).
For an OpenAI-compatible API, my current favorite method is to spin up models using oobabooga TGW. Your OpenAI API code then works seamlessly by simply switching out the api_base to the ooba endpoint. Regarding chat formatting, even ooba's Mistral formatting has issues[1], so I am doing my own in Langroid using the HuggingFace tokenizer.apply_chat_template[2].
[1] https://github.com/oobabooga/text-generation-webui/issues/53...
[2] https://github.com/langroid/langroid/blob/main/langroid/lang...
Related question - I assume ollama auto detects and applies the right chat formatting template for a model?
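To illustrate why chat formatting matters, here is a hand-rolled approximation of the Mistral-instruct template ([INST] ... [/INST] markers). In practice you would let the model tokenizer's apply_chat_template produce this from the template shipped with the model, since hand-rolled versions (like the ooba issue above) tend to drift from the official one; this sketch is only an approximation for illustration.

```python
def mistral_chat_format(messages):
    """Approximate the Mistral-instruct chat template: user turns are
    wrapped in [INST] ... [/INST]; assistant turns are closed with </s>."""
    out = "<s>"
    for m in messages:
        if m["role"] == "user":
            out += f"[INST] {m['content']} [/INST]"
        elif m["role"] == "assistant":
            out += f"{m['content']}</s>"
    return out

msgs = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there"},
    {"role": "user", "content": "How are you?"},
]
print(mistral_chat_format(msgs))
```

A model fine-tuned on one template will degrade noticeably if served with another, which is why delegating to the tokenizer's own template is the safer default.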
- Pushing ChatGPT's Structured Data Support to Its Limits
We (like simpleaichat from the OP) leverage Pydantic to specify the desired structured output; under the hood, Langroid either translates it to the OpenAI function-calling params or (for LLMs that don't natively support function-calling) auto-inserts appropriate instructions into the system prompt. We call this mechanism a ToolMessage:
https://github.com/langroid/langroid/blob/main/langroid/agen...
We take this idea much further: you can define a method in a ChatAgent to "handle" the tool and attach the tool to the agent. For stateless tools, you can define a "handle" method in the tool itself, and it gets patched into the ChatAgent as the handler for the tool.
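A minimal sketch of the Pydantic-to-function-spec translation described above (a hypothetical illustration, not Langroid's actual ToolMessage code): one path emits an OpenAI-style function spec, the other falls back to system-prompt instructions for models without native function-calling.

```python
import json
from pydantic import BaseModel, Field

class CityTemperature(BaseModel):
    """Return the current temperature for a city."""
    city: str = Field(..., description="City name")
    unit: str = Field("celsius", description="celsius or fahrenheit")

def _schema(model_cls) -> dict:
    # model_json_schema() is Pydantic v2; fall back to v1's .schema()
    return getattr(model_cls, "model_json_schema", model_cls.schema)()

def to_openai_function(model_cls) -> dict:
    """Translate a Pydantic model into an OpenAI function-calling spec."""
    return {
        "name": model_cls.__name__,
        "description": (model_cls.__doc__ or "").strip(),
        "parameters": _schema(model_cls),
    }

def to_prompt_instructions(model_cls) -> str:
    """Fallback for LLMs without native function-calling: put the schema
    into the system prompt as explicit instructions instead."""
    return ("When using this tool, respond with JSON matching this schema:\n"
            + json.dumps(_schema(model_cls), indent=2))

spec = to_openai_function(CityTemperature)
print(spec["name"])  # CityTemperature
```

Either way, the developer writes a single Pydantic class; which of the two serializations is used can be decided per-model at runtime.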
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Many services/platforms are careless or disingenuous when they claim they "train" on your documents, when what they actually do is RAG.
An under-appreciated benefit of RAG is the ability to have the LLM cite sources for its answers (which are in principle automatically/manually verifiable). You lose this citation ability when you finetune on your documents.
In Langroid (the Multi-Agent framework from ex-CMU/UW-Madison researchers) https://github.com/langroid/langroid
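The citation benefit is easy to see in the prompt construction: number the retrieved passages and ask the model to cite them by number. A minimal sketch (hypothetical, not Langroid's actual prompt):

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Number the retrieved passages so the LLM can cite them as [1], [2], ...
    -- the verifiable-citation ability that is lost when you finetune
    on the documents instead."""
    context = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    return (
        "Answer using ONLY the numbered extracts below, and cite the "
        "extract number(s) supporting each claim, e.g. [2].\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "When was the plant founded?",
    ["The plant was founded in 1952.", "It employs 3,000 people."],
)
print(prompt)
```

Because each cited number maps back to a concrete retrieved passage, an answer like "In 1952 [1]" can be checked automatically against extract [1]; a finetuned model's answer has no such anchor.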
- Build a search engine, not a vector DB
This resonates with the approach we’ve taken in Langroid (the Multi-Agent framework from ex-CMU/UW-Madison researchers): our DocChatAgent uses a combination of lexical and semantic retrieval, reranking and relevance extraction to improve precision and recall:
https://github.com/langroid/langroid/blob/main/langroid/agen...
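One common way to combine lexical and semantic retrieval is reciprocal rank fusion (RRF); this sketch shows the general technique, not necessarily what DocChatAgent does. Each retriever contributes a ranked list of document ids, and a document's fused score is the sum of 1/(k + rank) over the lists it appears in.

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of doc ids (e.g. one from BM25, one from
    embedding search) into a single ranking: score = sum of 1/(k + rank).
    Documents ranked highly by multiple retrievers float to the top."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical  = ["d3", "d1", "d7"]   # hypothetical BM25 ranking
semantic = ["d1", "d5", "d3"]   # hypothetical vector-search ranking
print(reciprocal_rank_fusion([lexical, semantic]))  # ['d1', 'd3', 'd5', 'd7']
```

Note how d1, which appears near the top of both lists, outranks d3, the lexical winner; a reranker or relevance-extraction pass would then operate on this fused list.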
- HuggingChat – ChatGPT alternative with open source models
In the Langroid library (a multi-agent framework from ex-CMU/UW-Madison researchers) we have these and more. For example here’s a script that combines web search and RAG:
https://github.com/langroid/langroid/blob/main/examples/docq...
- SuperDuperDB - how to use it to talk to your documents locally using llama 7B or Mistral 7B?
Thanks, also found Langroid: https://github.com/langroid/langroid/blob/main/README.md
- memory in ConversationalRetrievalChain removed
What are some alternatives?
s4 - Structured state space sequence models
simpleaichat - Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
modelfusion - The TypeScript library for building AI applications.
autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
vectordb - A minimal Python package for storing and retrieving text using chunking, embeddings, and vector search.
llm - Access large language models from the command-line
outlines - Structured Text Generation
Adala - Adala: Autonomous DAta (Labeling) Agent framework
chidori - A reactive runtime for building durable AI agents
agency - 🕵️♂️ Library designed for developers eager to explore the potential of Large Language Models (LLMs) and other generative AI through a clean, effective, and Go-idiomatic approach.
griptape - Modular Python framework for AI agents and workflows with chain-of-thought reasoning, tools, and memory.
lambdapi - Serverless runtime environment tailored for code produced by LLMs. Automatic API generation from your code, support for multiple programming languages, and integrated file and database storage solutions.