| | simpleAI | AlpacaDataCleaned |
|---|---|---|
| Mentions | 11 | 14 |
| Stars | 323 | 1,394 |
| Growth | - | - |
| Activity | 7.3 | 7.6 |
| Latest commit | 12 months ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
simpleAI
-
[P] I got fed up with LangChain, so I made a simple open-source alternative for building Python AI apps as easy and intuitive as possible.
Not related to my own project SimpleAI despite the name, but it looks like we could easily make the two work together, to keep it "simple". Nice work!
-
Run and create custom ChatGPT-like bots with OpenChat
Using this as an opportunity to mention my own related project, perhaps it can end up on your nice list one day. :)
https://github.com/lhenault/SimpleAI
- [D] OpenAI API vs. Open Source Self hosted for AI Startups
-
StableLM released
You could have a look at a project I’ve been working on, SimpleAI, doing exactly this by replicating the OpenAI endpoints (you can then use their JS client for integration). Adding StableLM should be straightforward, I plan to add it to the examples in the upcoming days once I have a bit of time.
-
[P] LoopGPT: A Modular Auto-GPT Framework
I’ve built SimpleAI with exactly these kinds of use cases in mind. That should allow supporting any model with minimal / no change to your project. Good job and good luck with LoopGPT, that looks nice!
-
Using the API in Node
You could give this a shot: https://github.com/lhenault/simpleAI
-
[D] Would a Tesla M40 provide cheap inference acceleration for self-hosted LLMs?
I don't know if this applies to your use case, but this would probably work if you're looking for an LLM to help with programming. I haven't really played around with it, but it may work for general LLM tasks; it doesn't have a web UI, though.
-
Alpaca, LLaMa, Vicuna [D]
As for llama.cpp specifically, you can indeed add any model: it's just a matter of writing a bit of glue code and declaring it in your models.toml config. It's quite straightforward thanks to some provided tools for Python (see here for instance). For any other language it's a matter of integrating it through the gRPC interface (which shouldn't be too hard for llama.cpp if you're comfortable in C++). I'm also planning to add REST support for backend models at some point.
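As a rough illustration of the kind of declaration described above, a models.toml entry might look something like the following. The section and field names here are guesses for illustration only, not the project's actual schema; check the repo's examples for the real format.

```toml
# Hypothetical model declaration; key names are illustrative.
[llama-7b]
name = "llama-7b"
# Address of the gRPC backend process serving this model (assumed).
network_address = "localhost:50051"
```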
-
[D] Is there currently anything comparable to the OpenAI API?
Shameless plug, but I've recently been working on SimpleAI, a project replicating the main endpoints of the OpenAI API, allowing you to seamlessly switch from their API to your own, since it's compatible with the OpenAI client.
-
[P] SimpleAI : A self-hosted alternative to OpenAI API
I wanted to share with you SimpleAI, a self-hosted alternative to OpenAI API.
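Because SimpleAI mirrors the OpenAI REST endpoints, pointing any OpenAI-compatible client at it is mostly a matter of swapping the base URL. Here's a minimal standard-library sketch of building such a request; the host, port, endpoint path, and model name are assumptions for illustration, not values from the project's docs.

```python
import json
import urllib.request

# Hypothetical address of a locally running SimpleAI server.
API_BASE = "http://127.0.0.1:8080"

def build_completion_request(model: str, prompt: str, max_tokens: int = 32):
    """Build an OpenAI-style POST request aimed at a self-hosted server."""
    payload = {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    return urllib.request.Request(
        f"{API_BASE}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("my-local-model", "Say hello.")
print(req.full_url)  # http://127.0.0.1:8080/v1/completions
```

With a server actually running, sending the request (e.g. via `urllib.request.urlopen(req)`) would return the usual OpenAI-shaped JSON response.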
AlpacaDataCleaned
-
While training LoRA I get 'Failed to read file... JSON parse error'
I tried using the default alpaca_data_cleaned.json training dataset as mentioned here: https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json. Does anyone know why I could be getting this error? The file must be in the correct format, since it is the default file they use in their example.
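For anyone debugging a JSON parse error like the one above, it can help to check the file against the standard Alpaca shape: a JSON array of objects with "instruction", "input", and "output" keys. A small sketch (the sample record is made up for illustration):

```python
import json

# A minimal record in the standard Alpaca JSON format.
sample = [
    {
        "instruction": "Give three tips for staying healthy.",
        "input": "",
        "output": "1. Eat a balanced diet. 2. Exercise. 3. Sleep well.",
    },
]

def validate_alpaca(records):
    """Return True if records look like an Alpaca-format dataset."""
    required = {"instruction", "input", "output"}
    return isinstance(records, list) and all(
        isinstance(r, dict) and required <= r.keys() for r in records
    )

# Round-tripping through json catches syntax problems: a stray BOM,
# trailing commas, or truncation would raise json.JSONDecodeError.
text = json.dumps(sample, ensure_ascii=False)
print(validate_alpaca(json.loads(text)))  # True
```

Loading the actual file with `json.load(open(path, encoding="utf-8"))` and running it through a check like this will usually pinpoint whether the problem is the file's syntax or the loader's expectations.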
-
Why run LLMs locally?
This cleaned Alpaca dataset gives a good idea of how data is formatted for the standard Alpaca JSON format. Personally, I'd handle making your own datasets by using GPT-4 to format the data into a dataset. You can do it by hand or use a LLaMA model, but I've found using ChatGPT to be the most efficient way to get the highest-quality output. I'm going for quality over quantity.
-
New llama LoRA trained on WizardLM dataset
I created a dataset merge based on the following very high-quality datasets:
- [P] Finetuning a commercially viable open source LLM (Flan-UL2) using Alpaca, Dolly15K and LoRA
-
Stability AI Launches the First of Its StableLM Suite of Language Models
That dataset is licensed under CC BY NC 4.0, which is not open. It also has a bunch of garbage in it; see https://github.com/gururise/AlpacaDataCleaned
- Alpacino-13B
-
GPT4-X-Alpaca 30B 4-bit, by MetaIX based on LoRA by chansung
The cleaned Alpaca dataset has integrated the Microsoft GPT-4 dataset and fixed many of the issues.
-
Alpaca, LLaMa, Vicuna [D]
13b Alpaca Cleaned (trained on the cleaned dataset) is very impressive and works well as an instruct model w/o any censorship.
-
Is there a good place to post datasets for the community?
There's already a community maintained Alpaca with cleaned data. https://github.com/gururise/AlpacaDataCleaned And a huge amount of work has already been done.
-
Dirty data sets and LLaMA/ALPACA...
this might be what you're looking for: https://github.com/gururise/AlpacaDataCleaned
What are some alternatives?
OpenChat - LLMs custom-chatbots console ⚡
StableLM - StableLM: Stability AI Language Models
dalai - The simplest way to run LLaMA on your local machine
safetensors - Simple, safe way to store and distribute tensors
gptcli - ChatGPT in command line with OpenAI API (gpt-3.5-turbo/gpt-4/gpt-4-32k)
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
GPT-4-LLM - Instruction Tuning with GPT-4
loopgpt - Modular Auto-GPT Framework
txtinstruct - 📚 Datasets and models for instruction-tuning
turbopilot - Turbopilot is an open source large-language-model based code completion engine that runs locally on CPU
ue5-llama-lora - A proof-of-concept project that showcases the potential for using small, locally trainable LLMs to create next-generation documentation tools.