| | simpleAI | turbopilot |
|---|---|---|
| Mentions | 11 | 15 |
| Stars | 323 | 3,839 |
| Growth | - | - |
| Activity | 7.3 | 10.0 |
| Latest commit | 12 months ago | 8 months ago |
| Language | Python | C++ |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
simpleAI
- [P] I got fed up with LangChain, so I made a simple open-source alternative for building Python AI apps as easy and intuitive as possible.
Not related to my own project SimpleAI despite the name, but it looks like we can easily make the two work together, to keep it "simple". Nice work!
- Run and create custom ChatGPT-like bots with OpenChat
Using this as an opportunity to mention my own related project, perhaps it can end up on your nice list one day. :)
https://github.com/lhenault/SimpleAI
- [D] OpenAI API vs. Open Source Self hosted for AI Startups
- StableLM released
You could have a look at a project I’ve been working on, SimpleAI, which does exactly this by replicating the OpenAI endpoints (you can then use their JS client for integration). Adding StableLM should be straightforward; I plan to add it to the examples in the coming days once I have a bit of time.
- [P] LoopGPT: A Modular Auto-GPT Framework
I’ve built SimpleAI with exactly these kinds of use cases in mind. That should allow supporting any model with minimal or no changes to your project. Good job and good luck with LoopGPT, it looks nice!
- Using the API in Node
You could give this a shot: https://github.com/lhenault/simpleAI
- [D] Would a Tesla M40 provide cheap inference acceleration for self-hosted LLMs?
I don't know if this applies to your use case, but it would probably work if you're looking for an LLM to help with programming. I haven't really played around with it, but it may also work for general LLM tasks; it doesn't have a web UI, though.
- Alpaca, LLaMa, Vicuna [D]
As for llama.cpp specifically, you can indeed add any model; it's just a matter of writing a bit of glue code and declaring it in your models.toml config. It's quite straightforward thanks to some provided tools for Python (see here for instance). For any other language it's a matter of integrating it through the gRPC interface (which shouldn't be too hard for llama.cpp if you're comfortable in C++). I'm also planning to add REST support for backend models at some point.
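The models.toml declaration mentioned above might look something like this. This is a hypothetical sketch: the model name, field names, and gRPC address are illustrative assumptions, not SimpleAI's documented schema.

```toml
# Hypothetical models.toml entry for a model served over the gRPC interface.
# All field names and values here are illustrative assumptions.
[llama-7b]
    [llama-7b.metadata]
        owned_by    = "llama.cpp"
        description = "Llama 7B served through a gRPC backend"
    [llama-7b.network]
        url  = "localhost:50051"   # address of the gRPC model server
        type = "gRPC"
```

Once declared this way, the model would be addressable by name (`llama-7b` here) through the OpenAI-compatible endpoints.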
- [D] Is there currently anything comparable to the OpenAI API?
Shameless plug, but I’ve recently been working on SimpleAI, a project replicating the main endpoints of the OpenAI API, allowing you to seamlessly switch from their API to your own, as it’s compatible with the OpenAI client.
- [P] SimpleAI : A self-hosted alternative to OpenAI API
I wanted to share with you SimpleAI, a self-hosted alternative to the OpenAI API.
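Because SimpleAI mirrors the OpenAI API shape, a client-side request body is identical to one aimed at OpenAI itself. A minimal sketch of building such a request (the endpoint URL and model name are assumptions for illustration, not values from the project):

```python
import json

# Hypothetical self-hosted endpoint; adjust to your own deployment.
SIMPLEAI_URL = "http://localhost:8080/chat/completions"

def chat_completion_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = chat_completion_request("llama-7b", "Hello!")
# POST json.dumps(payload) to SIMPLEAI_URL, or point an OpenAI client
# library at the self-hosted base URL instead of api.openai.com.
print(json.dumps(payload))
```

Since the payload is the standard OpenAI shape, existing OpenAI client libraries can be reused by changing only the base URL they target.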
turbopilot
- New version of Turbopilot released!
- GGML for Falcoder7B, SantaCoder 1B, TinyStarCoder 160M
FYI: https://github.com/ravenscroftj/turbopilot
- April 2023
TurboPilot: a self-hosted Copilot clone which uses the library behind llama.cpp to run the 6-billion-parameter Salesforce CodeGen model in 4 GiB of RAM. (https://github.com/ravenscroftj/turbopilot)
- Which Models Best for Programming?
This repo has potential.
- [D] What Repos/Tools Should We Pay Attention To?
Right now https://github.com/ggerganov/llama.cpp is the dominant back-end for querying models, but forks and alternatives like https://github.com/ravenscroftj/turbopilot keep popping up. Increasingly, models submitted to huggingface explicitly note in their READMEs that the model is not compatible with llama.cpp, and that a different back-end must be used.
- newbie seeking impressive llama models, am i missing something?
There's turbopilot. I haven't tried it yet, but it looks promising.
- LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
- LLM specialized in programming?
Turbopilot | open source LLM code completion engine and Copilot alternative
- Locally running models like Chatgpt for Emacs?
This 6B-parameter tool (according to the README) can be run with 4 GB of RAM. https://github.com/ravenscroftj/turbopilot
- What models and setup is good for generating code
There is an interesting link: https://github.com/ravenscroftj/turbopilot/wiki/Converting-and-Quantizing-The-Models . Just wondering if anyone has done this with the 16B model and put the weights somewhere.
What are some alternatives?
OpenChat - LLMs custom-chatbots console ⚡
tabby - Self-hosted AI coding assistant
dalai - The simplest way to run LLaMA on your local machine
fauxpilot - FauxPilot - an open-source alternative to GitHub Copilot server
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
ggml - Tensor library for machine learning
gptcli - ChatGPT in command line with OpenAI API (gpt-3.5-turbo/gpt-4/gpt-4-32k)
prompt-engineering - ChatGPT Prompt Engineering for Developers - deeplearning.ai
StableLM - StableLM: Stability AI Language Models
telegram-chatgpt-concierge-bot - Interact with OpenAI's ChatGPT via Telegram and Voice.
loopgpt - Modular Auto-GPT Framework
llm-apex-agents - Run Large Language Model "Agents" in Salesforce apex