Local_llama Alternatives
Similar projects and alternatives to local_llama
- private-gpt: Interact with your documents using the power of GPT, 100% privately, with no data leaks.
- LocalAI: The free, open source OpenAI alternative. Self-hosted, community-driven, and local-first; a drop-in replacement for OpenAI that runs on consumer-grade hardware, no GPU required. Runs gguf, transformers, diffusers, and many more model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities. A sketch of what "drop-in replacement" means in practice follows this list.
- h2ogpt: Private chat with a local GPT over documents, images, video, and more. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demos: https://gpt.h2o.ai/ and https://codellama.h2o.ai/
- localGPT: Chat with your documents on your local device using GPT models. No data leaves your device; 100% private.
- EmbedAI: An app to interact with your documents using the power of GPT, 100% privately, with no data leaks.
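
LocalAI advertises itself as a drop-in replacement for the OpenAI API. A minimal sketch of what that means in practice, assuming a LocalAI server running on its default port (8080) and the official `openai` Python client; the model name is a placeholder for whatever is configured locally:

```python
# Minimal sketch: reuse the standard OpenAI client against a local LocalAI server.
# Assumptions: LocalAI is running on its default port 8080, and a model has
# already been configured there; "your-local-model" is a placeholder name.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # point the client at LocalAI instead of api.openai.com
    api_key="not-needed",                 # LocalAI ignores the key by default
)

response = client.chat.completions.create(
    model="your-local-model",  # must match a model configured in LocalAI
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```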
local_llama reviews and mentions
- Discussion: Biggest Roadblocks to Deploy LLMs to Production
  I work with AWS daily, using Terraform, Python, and Java to create and maintain enterprise solutions. I have played with SageMaker, but it is so expensive that I hate to leave it up for longer than a day. I downloaded and created a chat-with-your-docs app (entirely in airplane mode) here. The point being that I've hosted models both locally and in the cloud, but I ended up sticking to API calls because they're so cheap.
- You can now chat with your documents privately!
  I posted the speed of mine in the README: https://github.com/jlonge4/local_llama
- Textgen webui for gpt_chatwithPDF
  I would like to use this tool (https://github.com/jlonge4/gpt_chatwithPDF/blob/main/gpt_chat_api.py), but unfortunately the local version (https://github.com/jlonge4/local_llama) is bound to the CPU and thus quite slow. Is there any way I could get text-generation-webui working with the tool above?
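
One possible answer to the question above: recent builds of text-generation-webui expose an OpenAI-compatible HTTP API when launched with the `--api` flag (served on port 5000 by default), so a CPU-bound script can offload generation to a GPU-backed webui instance. A minimal sketch, with the port and payload shape being assumptions about the local setup rather than anything taken from gpt_chat_api.py:

```python
import requests

# Assumption: text-generation-webui was started with --api, which serves an
# OpenAI-compatible endpoint (recent builds default to http://127.0.0.1:5000/v1).
URL = "http://127.0.0.1:5000/v1/chat/completions"

payload = {
    "messages": [
        {"role": "user", "content": "Summarize this chunk of PDF text: ..."},
    ],
    "max_tokens": 256,
}

resp = requests.post(URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```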
- Local GPT (completely offline and no OpenAI!)
- Offline llama
  Ask and you shall receive here. Code here if interested.
Stats
jlonge4/local_llama is an open source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of local_llama is Python.