-
local_llama
This repo showcases how you can run a model locally and offline, free of OpenAI dependencies.
-
LocalAI
:robot: The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It can generate Text, Audio, Video, and Images, and includes voice cloning capabilities.
-
h2ogpt
Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
It's also worth checking out https://github.com/go-skynet/LocalAI, a local LLM runner with an OpenAI-compatible API. I've gotten several apps working against it that would otherwise require paid OpenAI access. It was a bit fiddly to get working with my GPU (it uses llama.cpp and its cuBLAS implementation), but once I did, it's been working pretty well.
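Because LocalAI exposes an OpenAI-compatible API, pointing an app at it mostly means swapping the base URL. Here's a minimal sketch using only the Python standard library; the port (8080 is LocalAI's default) and the model name `ggml-gpt4all-j` are assumptions — use whatever model you've actually loaded.

```python
import json
import urllib.request

# Assumed local endpoint; 8080 is LocalAI's default port.
BASE_URL = "http://localhost:8080/v1"

def build_payload(prompt, model="ggml-gpt4all-j", temperature=0.7):
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def chat(prompt, base_url=BASE_URL):
    """POST to the OpenAI-compatible /chat/completions route and return the reply text."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same shape works with the official `openai` client by setting its `base_url` to the local server, which is why apps written against the paid API can often run against LocalAI unchanged.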
Just spent the morning setting up imartinez/privateGPT ("Interact privately with your documents using the power of GPT, 100% privately, no data leaks", github.com) on my machine. It's pretty good, but it desperately needs GPU support (which is coming).
Yeah, someone mentioned that on my other post about my project; looks like I'm a day late and a dollar short. You can use the GPU with mine by following the instructions here, if you care to get into it.
This has GPU support and does the same as privateGPT: https://github.com/h2oai/h2ogpt