open_llama
gpt-json
| | open_llama | gpt-json |
|---|---|---|
| Mentions | 52 | 7 |
| Stars | 7,193 | 726 |
| Growth | 1.3% | - |
| Activity | 5.3 | 6.8 |
| Latest commit | 10 months ago | 15 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
open_llama
-
How Open is Generative AI? Part 2
The RedPajama dataset was adapted by the OpenLLaMA project at UC Berkeley, creating an open-source LLaMA equivalent without Meta's restrictions. The model's later version also included data from Falcon and StarCoder. This highlights the importance of open-source models and datasets, enabling free repurposing and innovation.
-
GPT-4 API general availability
OpenLLaMA is though. https://github.com/openlm-research/open_llama
All of these are surmountable problems.
We can beat OpenAI.
We can drain their moat.
-
Recommend me a computer for local AI for $500
#1: Open-source Reproduction of Meta AI's LLaMA: OpenLLaMA-13B released (trained for 1T tokens) | 0 comments
#2: #1 on HuggingFace.co's Leaderboard: Model Falcon 40B is now Free (Apache 2.0 License) | 0 comments
#3: Have you seen this repo? "running LLMs on consumer-grade hardware. compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras and many others!" | 0 comments
-
Who is openllama from?
Trained OpenLLaMA models are from the OpenLM Research team in collaboration with Stability AI: https://github.com/openlm-research/open_llama
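For reference, the repo's README shows the released checkpoints loading through the standard Hugging Face `transformers` LLaMA classes. A minimal sketch along those lines (the model ID follows the repo's naming; exact behavior may shift across `transformers` versions):

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Model ID as published by OpenLM Research on the Hugging Face Hub.
model_path = "openlm-research/open_llama_13b"

# The repo advises the slow (sentencepiece) tokenizer; the fast
# tokenizer had known issues with these checkpoints.
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Q: What is the largest animal?\nA:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

output = model.generate(input_ids=input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0]))
```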
-
Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
I can't use Llama or any model from the Llama family, due to license restrictions. There's now also the OpenLLaMA family of models, though, which has the same architecture but was trained on an open dataset (RedPajama, the same dataset the base model in my app was trained on). I'd love to pursue the direction of extended context lengths for on-device LLMs. Likely in a month or so, when I've implemented all the product features currently on my backlog.
-
XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
https://github.com/openlm-research/open_llama#update-0615202...).
XGen-7B is probably the superior 7B model: it's trained on more tokens and with a longer default sequence length (although both can presumably adopt SuperHOT-style Position Interpolation to extend context). Larger models still probably perform better on an absolute basis.
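Roughly, Position Interpolation stretches a pretrained model's context by rescaling RoPE position indices, so a longer sequence is compressed back into the position range the model saw during training. A minimal standalone sketch of the idea (not code from either project):

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE angles for the given (possibly fractional) positions."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return torch.outer(positions, inv_freq)  # shape: (seq_len, dim // 2)

def interpolated_positions(seq_len: int, trained_len: int) -> torch.Tensor:
    """Position Interpolation: compress positions so a longer sequence
    maps back into the [0, trained_len) range used during training."""
    scale = trained_len / max(seq_len, trained_len)
    return torch.arange(seq_len).float() * scale

# E.g. a model trained on 2048-token contexts, evaluated at 8192 tokens:
angles = rope_angles(interpolated_positions(8192, trained_len=2048), dim=128)
```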
-
MosaicML Agrees to Join Databricks to Power Generative AI for All
Compare it to OpenLLaMA: its GitHub doesn't have a single script showing how to do anything.
-
Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
OpenLLaMA models up to 13B parameters have now been trained on 1T tokens:
https://github.com/openlm-research/open_llama
-
Containerized AI before Apocalypse
The deployed LLM binary, orca mini, has 3 billion parameters. Orca mini is based on the OpenLLaMA project.
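For a sense of scale, a 3B model like this is small enough to serve through llama.cpp bindings on modest hardware. A minimal sketch using `llama-cpp-python` (the model filename and prompt template are assumptions; substitute whatever quantized weights and format your build expects):

```python
from llama_cpp import Llama

# Hypothetical path to a locally downloaded quantized orca-mini checkpoint.
llm = Llama(model_path="./orca-mini-3b.q4_0.bin", n_ctx=2048)

# orca-mini-style instruction format (assumed; check your model card).
output = llm(
    "### User:\nExplain containers in one sentence.\n\n### Response:\n",
    max_tokens=64,
    stop=["### User:"],
)
print(output["choices"][0]["text"])
```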
-
AI - weekly megathread!
OpenLM Research released its 1T token version of OpenLLaMA 13B - the permissively licensed open source reproduction of Meta AI's LLaMA large language model.
gpt-json
-
Structured Output from LLMs (Without Reprompting!)
I did a POC project with it recently. The guidance library isn't as functional on the gpt-3.5-turbo and gpt-4 models as on plain gpt-3. I found I had better results using https://github.com/piercefreeman/gpt-json, and it doesn't require multiple calls to the API. Not as feature-filled, but it may meet your needs.
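For context, gpt-json pairs a pydantic schema with the chat API and parses the reply back into that schema in a single call. A minimal sketch adapted from the project's README (the schema and prompt are illustrative, and exact signatures may have changed since):

```python
import asyncio

from gpt_json import GPTJSON, GPTMessage, GPTMessageRole
from pydantic import BaseModel

API_KEY = "sk-..."  # your OpenAI API key

# The desired output schema; gpt-json validates the reply against it.
class SentimentSchema(BaseModel):
    sentiment: str

SYSTEM_PROMPT = """
Analyze the sentiment of the given text.

Respond with the following JSON schema:

{json_schema}
"""

async def main():
    gpt_json = GPTJSON[SentimentSchema](API_KEY)
    payload = await gpt_json.run(
        messages=[
            GPTMessage(role=GPTMessageRole.SYSTEM, content=SYSTEM_PROMPT),
            GPTMessage(role=GPTMessageRole.USER, content="Text: I love this product"),
        ]
    )
    print(payload.response.sentiment)

asyncio.run(main())
```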
-
This week's top indie A.I projects, launches and resources
Gpt-json: Structured and typehinted GPT responses in Python
- GitHub - piercefreeman/gpt-json: Structured and typehinted GPT responses in Python
- Show HN: GPT-JSON - Structured and typehinted GPT responses in Python
- GPT-JSON: Structured and typehinted GPT responses in Python
What are some alternatives?
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
zod-chatgpt
llama.cpp - LLM inference in C/C++
jsonformer - A Bulletproof Way to Generate Structured JSON from Language Models
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
emdash - Wisdom indexer - use AI to organize text snippets so you can actually remember & learn from what you read
gpt4all - gpt4all: run open-source LLMs anywhere
evadb - Database system for AI-powered apps
gorilla - Gorilla: An API store for LLMs
struct-gpt - Get structured output from LLMs
ggml - Tensor library for machine learning
gpt-logic - Translate the natural language generated by OpenAI's GPT models or any other large language models into JavaScript data types like booleans and objects.