Awesome-LLM vs langchain

| | Awesome-LLM | langchain |
|---|---|---|
| Mentions | 10 | 152 |
| Stars | 14,654 | 56,526 |
| Growth | - | - |
| Activity | 8.6 | 10.0 |
| Latest commit | 8 days ago | 10 months ago |
| Language | Python | |
| License | Creative Commons Zero v1.0 Universal | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Awesome-LLM
- XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
Here are some high-level answers:
"7B" refers to the number of parameters (weights) in a model. Within a model family, the versions with more parameters take more compute to train and generally perform better.
A foundational model is a model "pretrained" on a massive dataset (usually the bulk of the compute cost). It is considered the "raw" model, which is then fine-tuned for specific tasks (e.g., turned into a chatbot).
"8K length" refers to the context window length (in tokens). This is basically an LLM's short-term memory: think of it as its attention span, the amount of text it can take into account when generating output.
"1.5T tokens" refers to the size of the training corpus.
In general, Wikipedia (or, I suppose, ChatGPT 4/Bing Chat with web browsing) is a decent enough place to start reading and asking basic questions. I'd recommend starting here: https://en.wikipedia.org/wiki/Large_language_model and following the related concepts.
For those going deeper, there are a lot of general resource lists like https://github.com/Hannibal046/Awesome-LLM or https://github.com/Mooler0410/LLMsPracticalGuide, or one I like, https://sebastianraschka.com/blog/2023/llm-reading-list.html (there are a bajillion of these, and you'll find more once you get a grasp on the terms you want to surf for). Almost everything is published on arXiv, and most of it is fairly readable even as a layman.
For non-ML programmers looking to get up to speed, I feel like Karpathy's Zero to Hero/nanoGPT or Jay Mody's picoGPT https://jaykmody.com/blog/gpt-from-scratch/ are an alternative, and maybe better, way to understand the basic concepts at a practical level.
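To give a taste of why those from-scratch tutorials are approachable: the heart of a GPT layer is a few lines of array math. Here is a minimal NumPy sketch of causal self-attention (single head, learned projections omitted; an illustration in the same spirit, not picoGPT's actual code):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def causal_self_attention(q, k, v):
    """q, k, v: [seq_len, dim]. Each token attends only to itself and earlier tokens."""
    scores = q @ k.T / np.sqrt(q.shape[-1])             # query/key similarity
    scores += np.triu(np.full_like(scores, -1e9), k=1)  # mask out future positions
    return softmax(scores) @ v                          # weighted average of values

x = np.random.randn(8, 16)  # 8 tokens with 16-dim embeddings
print(causal_self_attention(x, x, x).shape)  # (8, 16)
```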
- Couple of questions about a.i that can be run locally
- How to dive deeper into LLMs?
- [Hiring] Developer to build AI-powered chatbots with open source LLMs
- Creating a Wiki for all things Local LLM. What do you want to know?
Check out this repo; there should be some useful things worth noting: https://github.com/Hannibal046/Awesome-LLM
- Large Language Model (LLM) Resources
- Curated list for LLMs: papers, training frameworks, tools to deploy, public APIs
- Performance of GPT-4 vs PaLM 2
First, this is a pretty good starting point as a resource for learning about and finding open-source models, and for the overall public history of LLM progress.
- FreedomGPT: AI with no censorship
This seems fishy as fuck. The first red flag is a fishy installer instead of any Hugging Face link for the model. Upon further searching I found this: https://desuarchive.org/g/thread/92686632/#92692092 There are posts in its own sub, r slash freedomgpt, raising concerns, and many new accounts with low karma replying to them (I don't think I can link other subs here; check them yourself), so there's 100% some botting/astroturfing going on. Not touching this. Even in the best-case scenario that this is legit with no funny business, it's supposed to be based on LLaMA, which is a substantially different, tiny model (hence why it can run on your computer at all). This is no ChatGPT equivalent either way. I would recommend getting something more reputable from GitHub if you are interested in running LLMs yourself.
- Ask HN: Foundational Papers in AI
https://github.com/Hannibal046/Awesome-LLM has a curated list of LLM-specific resources.
Not the creator; I just happened upon it when researching LLMs today.
langchain
- 🗣️🤖 Ask to your Neo4J knowledge base in NLP & get KPIs
LangChain, with its Custom Tools, is also a great (and very efficient) way to set up a dedicated Q&A agent (for example, for chat purposes).
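A minimal sketch of that pattern, using LangChain's classic agent API as of mid-2023 (the tool body is a placeholder where a real Cypher query against Neo4j would go):

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI

def count_users(_query: str) -> str:
    # Placeholder: a real tool would run a Cypher query via the neo4j driver,
    # e.g. "MATCH (u:User) RETURN count(u)".
    return "1024 users"

tools = [
    Tool(
        name="neo4j-user-count",
        func=count_users,
        description="Returns the number of users stored in the Neo4j knowledge base.",
    )
]

agent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
print(agent.run("How many users do we have?"))  # requires OPENAI_API_KEY
```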
- LangChain – Some quick, high level thoughts on improvements/changes
- Claude 2 Internal API Client and CLI
We're using it via LangChain talking to Amazon Bedrock, which is hosting Claude 1.x. It's comparable to GPT-3.x; not bad. The integration doesn't seem to be fully there, though: I think LangChain is expecting "Human:" and "AI:", but Claude uses "Assistant:".
https://github.com/hwchase17/langchain/issues/2638
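For anyone hitting the same thing, Claude is trained on a specific turn format, so the mismatch is roughly this (a sketch of the prompt strings, not LangChain's exact template):

```python
# What Anthropic's models expect: leading "\n\nHuman:" and a closing "\n\nAssistant:".
claude_style = "\n\nHuman: Summarize this report in one sentence.\n\nAssistant:"

# A generic template ending in "AI:" can degrade Claude's output,
# since that label never appears in its training format.
generic_style = "Human: Summarize this report in one sentence.\nAI:"
```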
- Any better alternatives to fine-tuning GPT-3 yet to create a custom chatbot persona based on provided knowledge for others to use?
Depending on how much work you want to put into it, you can get started at Hugging Face with their models and datasets, but you'd need compute power, some MLOps, etc. I was introduced to the concept in this video, since Google has their Vertex AI tools on Google Cloud, and there's always LangChain, but I'm not sure about anything recent.
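For reference, getting a first Hugging Face model generating text locally takes only a few lines (a sketch; gpt2 is just a small model that runs on CPU, not chatbot-quality):

```python
from transformers import pipeline  # pip install transformers

# Downloads the model weights from the Hugging Face hub on first run.
generator = pipeline("text-generation", model="gpt2")
print(generator("A custom chatbot persona is", max_new_tokens=30)[0]["generated_text"])
```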
- langchain VS griptape - a user suggested alternative
2 projects | 11 Jul 2023 • 2 projects | 9 Jul 2023
- Vector storage is coming to Meilisearch to empower search through AI
a documentation chatbot proof of concept using GPT-3.5 and LangChain
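The usual shape of that kind of proof of concept with LangChain's classic API (a sketch: the document chunks are stand-ins, and an OPENAI_API_KEY plus faiss-cpu are assumed):

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

docs = [  # stand-in documentation chunks
    "Meilisearch is a fast, open-source search engine.",
    "Vector storage enables semantic (similarity-based) search.",
]

store = FAISS.from_texts(docs, OpenAIEmbeddings())  # embed and index the docs
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=store.as_retriever(),
)
print(qa.run("What does vector storage enable?"))
```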
- ChatPDF: What ChatGPT Can't Do, This Can!
I encourage everyone to pay attention to the LangChain open-source project and leverage it to achieve tasks that ChatGPT alone cannot handle.
- LangChain Arbitrary Command Execution - CVE-2023-34541
- Langchain Is Pointless
Yeah, I never know where memory goes exactly in LangChain; it's not always clear. But the main insight I remember is this: take a look at their MULTI_PROMPT_ROUTER_TEMPLATE: https://github.com/hwchase17/langchain/blob/560c4dfc98287da1...
It's a lot of instructions for an LLM; they seem to forget that an LLM is an auto-completion machine, and what data it was trained on. Using <<>> for sections is not a normal thing (it's not markdown, which is probably read far more often on the internet). Instead of open JSON comments, why not type signatures? Instead of so many rules, why not give it examples? It is an autocomplete machine!
They rely too much on the LLM being smart, probably because they only test things on GPT-4 and 3.5, but with GPT4All models this prompt was not working at all, so I had to rewrite it. For simple routing we don't even need JSON; carrying the `next_inputs` here is weird if you don't need it.
So this is my version of it: https://gist.github.com/rogeriochaves/b67676977eebb1936b9b5c...
It's so basic it's dumb, yet it is more powerful, because it does not rely on GPT-4-level intelligence; it's just what I needed.
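In that spirit, an example-driven router prompt can replace the rules-and-JSON approach entirely (a hypothetical sketch, not the contents of the gist above):

```python
# Few-shot routing: show examples instead of rules, ask for a bare label instead of JSON.
ROUTER_PROMPT = """Route each question to one destination: physics, math, or general.

Question: Why does the sky look blue?
Destination: physics

Question: What is the integral of x^2?
Destination: math

Question: What's a good name for a cat?
Destination: general

Question: {question}
Destination:"""

def route(complete, question: str) -> str:
    # `complete` is any text-completion callable (a stand-in for your LLM of choice).
    return complete(ROUTER_PROMPT.format(question=question)).strip().split()[0]
```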
What are some alternatives?
FreedomGPT - This codebase is for a React and Electron-based app that executes the FreedomGPT LLM locally (offline and private) on Mac and Windows using a chat-based interface
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
LLMZoo - ⚡LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models.⚡
llama_index - LlamaIndex is a data framework for your LLM applications
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
llama - Inference code for Llama models
dalai - The simplest way to run LLaMA on your local machine
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
langchain - 🦜🔗 Build context-aware reasoning applications
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.