| | simpleAI | vGPU_LicenseBypass |
|---|---|---|
| Mentions | 11 | 3 |
| Stars | 323 | 212 |
| Growth | - | - |
| Activity | 7.3 | 0.0 |
| Latest commit | 12 months ago | over 1 year ago |
| Language | Python | PowerShell |
| License | MIT License | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
simpleAI
-
[P] I got fed up with LangChain, so I made a simple open-source alternative for building Python AI apps as easy and intuitive as possible.
Not related to my own project SimpleAI, despite the name, but it looks like we could easily make the two work together, to keep it "simple". Nice work!
-
Run and create custom ChatGPT-like bots with OpenChat
Using this as an opportunity to mention my own related project, perhaps it can end up on your nice list one day. :)
https://github.com/lhenault/SimpleAI
- [D] OpenAI API vs. Open Source Self hosted for AI Startups
-
StableLM released
You could have a look at a project I've been working on, SimpleAI, which does exactly this by replicating the OpenAI endpoints (you can then use their JS client for integration). Adding StableLM should be straightforward; I plan to add it to the examples in the coming days once I have a bit of time.
-
[P] LoopGPT: A Modular Auto-GPT Framework
I've built SimpleAI with exactly these kinds of use cases in mind. It should let you support any model with minimal or no changes to your project. Good job and good luck with LoopGPT, that looks nice!
-
Using the API in Node
You could give this a shot: https://github.com/lhenault/simpleAI
-
[D] Would a Tesla M40 provide cheap inference acceleration for self-hosted LLMs?
I don't know if this applies to your use case, but it would probably work if you're looking for an LLM to help with programming. I haven't really played around with it, but it may work for general LLM tasks; it doesn't have a web UI, though.
-
Alpaca, LLaMa, Vicuna [D]
As for llama.cpp specifically, you can indeed add any model; it's just a matter of writing a bit of glue code and declaring it in your models.toml config. It's quite straightforward thanks to some provided tools for Python (see here for instance). For any other language, it's a matter of integrating it through the gRPC interface (which shouldn't be too hard for llama.cpp if you're comfortable in C++). I'm also planning to add REST support for backend models at some point.
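As a rough illustration of declaring a model this way, here is what such an entry could look like. The table and field names below are guesses for illustration, not the actual SimpleAI schema; check the project's own examples for the real format:

```toml
# Hypothetical models.toml entry -- all field names are illustrative only.
[llama-cpp-7b]

  [llama-cpp-7b.metadata]
  owned_by    = "llama.cpp"
  permission  = []
  description = "A llama.cpp model served behind the gRPC interface"

  [llama-cpp-7b.network]
  type = "gRPC"
  url  = "localhost:50051"   # address of the gRPC model backend
```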
-
[D] Is there currently anything comparable to the OpenAI API?
Shameless plug, but I've recently been working on SimpleAI, a project replicating the main endpoints of the OpenAI API, letting you seamlessly switch from their API to your own, since it's compatible with the OpenAI client.
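The "seamless switch" above comes down to the fact that an OpenAI-compatible server accepts the same HTTP requests, so only the base URL changes. A minimal sketch using just the standard library; the URL, port, and model name are placeholders for illustration, not SimpleAI defaults:

```python
import json
from urllib import request

def build_chat_request(base_url, model, messages, api_key="not-needed"):
    """Build an OpenAI-style chat completion request.

    Any server replicating the OpenAI API (such as a self-hosted
    instance) accepts the same POST body, so switching away from
    api.openai.com only means changing base_url.
    """
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Point the request at a local server instead of api.openai.com
# (URL and model name are placeholders):
req = build_chat_request(
    "http://localhost:8080/v1",
    "llama-7b",
    [{"role": "user", "content": "Hello"}],
)
# request.urlopen(req) would send it; omitted here since no server is running.
```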
-
[P] SimpleAI : A self-hosted alternative to OpenAI API
I wanted to share with you SimpleAI, a self-hosted alternative to the OpenAI API.
vGPU_LicenseBypass
- [D] Would a Tesla M40 provide cheap inference acceleration for self-hosted LLMs?
-
Proxmox VGPU issues
I got my hands on a Tesla P4 and stuffed it into a Cisco C220 M4 running Proxmox. The plan was to split it between a Windows VM for "cloud" gaming and an Ubuntu VM for Plex. I have set up the Windows VM and followed the instructions at https://gitlab.com/polloloco/vgpu-proxmox, and everything seemed to work at first. After a while, graphics performance would suffer, and exiting a game would show a popup from NVIDIA about reduced performance due to a missing license. I followed the instructions here, https://github.com/KrutavShah/vGPU_LicenseBypass/issues/2, and that killed the popup but not the slowdown. On a second read, it sounds like that fix is only for v14, and I installed v15. Anyone have any ideas?
-
vGPU with ESXI suggestions
I don't know if this works under ESXi, but have a quick look at https://github.com/KrutavShah/vGPU_LicenseBypass
What are some alternatives?
OpenChat - LLMs custom-chatbots console ⚡
vgpu-proxmox
dalai - The simplest way to run LLaMA on your local machine
nvidia-docker - Build and run Docker containers leveraging NVIDIA GPUs
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
gptcli - ChatGPT in command line with OpenAI API (gpt-3.5-turbo/gpt-4/gpt-4-32k)
StableLM - StableLM: Stability AI Language Models
loopgpt - Modular Auto-GPT Framework
turbopilot - Turbopilot is an open source large-language-model based code completion engine that runs locally on CPU
gpt-jargon - Jargon is a natural language programming language specified and executed by LLMs like GPT-4.
chatgpt-md - A (nearly) seamless integration of ChatGPT into Obsidian.