open-llms vs open_llama

| | open-llms | open_llama |
|---|---|---|
| Mentions | 22 | 52 |
| Stars | 10,168 | 7,201 |
| Growth | - | 0.7% |
| Activity | 7.7 | 5.3 |
| Last commit | about 1 month ago | 10 months ago |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
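The exact formula behind the activity number isn't published here. Purely as an illustration of "recent commits have higher weight", a recency-weighted score could use exponential decay; the half-life parameter below is an assumption, not the site's real method:

```python
from datetime import datetime, timezone

HALF_LIFE_DAYS = 30  # assumed parameter, not the site's actual value

def activity_score(commit_dates: list[datetime]) -> float:
    """Sum of per-commit weights that halve every HALF_LIFE_DAYS.

    commit_dates are expected to be timezone-aware UTC datetimes.
    A commit from today contributes ~1.0; one from 30 days ago ~0.5.
    """
    now = datetime.now(timezone.utc)
    return sum(0.5 ** ((now - d).days / HALF_LIFE_DAYS) for d in commit_dates)
```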
open-llms
- 7 SaaS ideas 💡 you can steal
Everyone knows about ChatGPT by now, but did you know there are other models, like "Mistral" or "Falcon"? You can view a full list of open-source models here or on Hugging Face.
- eugeneyan/open-llms
- GPT-4 API general availability
This is the most well-maintained list of commercially usable open LLMs: https://github.com/eugeneyan/open-llms
MPT, OpenLLaMA, and Falcon are probably the most generally useful.
For code, Replit Code (specifically replit-code-instruct-glaive) and StarCoder (WizardCoder-15B) are the current top open models and both can be used commercially.
- Local LLMs: After Novelty Wanes
There's also MPT, which has a 7B, and Falcon, which has a 7B and a 40B, although they haven't had the inference tuning in community projects that the LLaMAs have had. This is a good repo for reviewing what's available at the moment: https://github.com/eugeneyan/open-llms
- How to keep track of all the LLMs out there?
- How do I learn AI/Machine Learning?
If I were going to do the same, I would at least build off of something; check out https://github.com/eugeneyan/open-llms. You should at least have a decent understanding of artificial neural networks (ANNs), and this link is pretty good on the basic concepts you need, including classification and learning types. Good luck, friend.
- LLM and privacy
- Local LLM to learn, explore and use for commercial purpose
- Best instruct model recommendations to use with T4?
This list might help: https://github.com/eugeneyan/open-llms
- [D] What is the best open source LLM so far?
open_llama
- How Open is Generative AI? Part 2
The RedPajama dataset was adapted by the OpenLLaMA project at UC Berkeley, creating an open-source LLaMA equivalent without Meta's restrictions. Later versions of the model also included data from Falcon and StarCoder. This highlights the importance of open-source models and datasets, enabling free repurposing and innovation.
- GPT-4 API general availability
OpenLLaMA is, though. https://github.com/openlm-research/open_llama
All of these are surmountable problems.
We can beat OpenAI.
We can drain their moat.
- Recommend me a computer for local AI for $500
#1: Open-source reproduction of Meta AI's LLaMA: OpenLLaMA-13B released (trained for 1T tokens) | 0 comments
#2: #1 on HuggingFace.co's leaderboard: Falcon 40B model is now free (Apache 2.0 license) | 0 comments
#3: Have you seen this repo? "running LLMs on consumer-grade hardware. compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras and many others!" | 0 comments
- Who is openllama from?
Trained OpenLLaMA models are from the OpenLM Research team in collaboration with Stability AI: https://github.com/openlm-research/open_llama
- Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
I can't use Llama or any model from the Llama family, due to license restrictions. Although now there's also the OpenLLaMA family of models, which have the same architecture but were trained on an open dataset (RedPajama, the same dataset the base model in my app was trained on). I'd love to pursue the direction of extended context lengths for on-device LLMs. Likely in a month or so, when I've implemented all the product features that I currently have on my backlog.
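Because OpenLLaMA keeps the LLaMA architecture, its weights load through the standard LLaMA classes in Hugging Face transformers. A minimal sketch, assuming transformers and torch are installed and using the 3B checkpoint published by OpenLM Research:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# OpenLLaMA reuses the LLaMA architecture, so no custom model code is needed.
model_id = "openlm-research/open_llama_3b"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("Q: What is a llama?\nA:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```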
- XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
https://github.com/openlm-research/open_llama#update-0615202...).
XGen-7B is probably the superior 7B model: it's trained on more tokens and a longer default sequence length (although both can presumably adopt SuperHOT-style position interpolation to extend context), but larger models still probably perform better on an absolute basis.
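For context, a minimal sketch of the position-interpolation idea behind SuperHOT (the function and the 2048-to-8192 numbers are illustrative, not any model's actual config): RoPE position indices are rescaled so positions in the longer sequence map back into the range the model saw during training.

```python
import numpy as np

def rope_angles(positions: np.ndarray, dim: int,
                base: float = 10000.0, scale: float = 1.0) -> np.ndarray:
    """Rotary (RoPE) angles for each (position, frequency) pair.

    scale < 1.0 implements position interpolation: indices are compressed
    so a longer context maps back into the trained position range.
    """
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))  # (dim/2,)
    return np.outer(positions * scale, inv_freq)              # (len, dim/2)

# Hypothetical numbers: a model trained at 2048 tokens, extended to 8192.
trained_ctx, target_ctx = 2048, 8192
angles = rope_angles(np.arange(target_ctx), dim=128,
                     scale=trained_ctx / target_ctx)
# Position 8188 now yields the same angles as original position 2047,
# so the model never sees rotary angles outside its trained range.
```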
- MosaicML Agrees to Join Databricks to Power Generative AI for All
Compare it to OpenLLaMA. Its GitHub doesn't have a single script on how to do anything.
- Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
OpenLLaMA models up to 13B parameters have now been trained on 1T tokens:
https://github.com/openlm-research/open_llama
- Containerized AI before Apocalypse 🐳🤖
The deployed LLM binary, Orca Mini, has 3 billion parameters. Orca Mini is based on the OpenLLaMA project.
- AI - weekly megathread!
OpenLM Research released its 1T-token version of OpenLLaMA 13B, the permissively licensed open-source reproduction of Meta AI's LLaMA large language model. [Details].
What are some alternatives?
SillyTavern - LLM Frontend for Power Users.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
SillyTavern-Extras - Extensions API for SillyTavern.
llama.cpp - LLM inference in C/C++
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
llm-jeopardy - Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts
gpt4all - Run open-source LLMs anywhere.
azure-search-openai-demo - A sample app for the Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.
gorilla - Gorilla: An API store for LLMs
panml - PanML is a high level generative AI/ML development and analysis library designed for ease of use and fast experimentation.
ggml - Tensor library for machine learning