| | PrivateGPT4Linux | llm-chain |
|---|---|---|
| Mentions | 23 | 3 |
| Stars | 15 | 1,170 |
| Growth | - | 4.6% |
| Activity | 4.1 | 8.6 |
| Last commit | 7 days ago | 11 days ago |
| Language | Shell | Rust |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
PrivateGPT4Linux
- PrivateGPT: Interact with your documents using the power of GPT, 100% privately, no data leaks
- Need guidance in this sea of information on how to set up a local AI
I found things like this dataset and LocalAI, and I followed the article to get PrivateGPT and the GPT4All groovy.bin, but I'm completely lost. The more I research the internet or ask BingAI for answers, the more questions I get instead. At this stage I don't know what goes where, whether there's a difference between source documents and datasets, whether I should run this from my 2 TB SSD or keep the data on my 8 TB HDD, or whether any of this will even work on my PC.
- Several newb questions
No; as with the last question, it does not have access to anything except the model data itself. However, there are approaches that let LLMs access local documents: you could have a program that extracts data from the database into a local folder of text files. This could also work for question 2 (I didn't mention it there because online datasets are really big, and it would take the model hours to give an answer; if the database is not large, there might be a shot). Check https://github.com/imartinez/privateGPT (must be GPT4All-compatible models, sadly).
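The database-to-text-files idea above can be sketched in a few lines of Python. Everything here is invented for illustration: the table name, columns, and output folder are assumptions, and the demo builds an in-memory SQLite database so it is self-contained. A privateGPT-style tool would then ingest the resulting folder of text files.

```python
import sqlite3
from pathlib import Path

# Demo database standing in for whatever store holds your data.
# Table and column names are hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (id INTEGER, title TEXT, body TEXT)")
db.execute("INSERT INTO articles VALUES (1, 'Hello', 'Some local text.')")

# Export each row as its own plain-text file; small per-row files
# keep each document cheap to embed and search.
out_dir = Path("source_documents")
out_dir.mkdir(exist_ok=True)

for row_id, title, body in db.execute("SELECT id, title, body FROM articles"):
    (out_dir / f"article_{row_id}.txt").write_text(f"{title}\n\n{body}")

db.close()
```

For a large database you would export only the tables (or a filtered subset) you actually want the model to see, since ingestion and query time grow with the corpus.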
- What solution would best suit a SaaS for reading and answering questions about data from PDF files uploaded by users
I've been doing exactly this with an open source repository called PrivateGPT (imartinez/privateGPT on GitHub: Interact privately with your documents using the power of GPT, 100% privately, no data leaks).
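For the upload side of such a SaaS, one minimal sketch is to keep each user's files in their own folder so a privateGPT-style ingestor can index them per tenant. The paths and the `save_upload` helper below are hypothetical, not part of the repo's API:

```python
from pathlib import Path

# Root directory for all tenants' uploads; an assumption for this sketch.
UPLOAD_ROOT = Path("uploads")

def save_upload(user_id: str, filename: str, data: bytes) -> Path:
    """Store an uploaded file under the user's own document folder."""
    user_dir = UPLOAD_ROOT / user_id
    user_dir.mkdir(parents=True, exist_ok=True)
    # Path(filename).name strips any directory components a client sends,
    # so one user cannot write outside their own folder.
    dest = user_dir / Path(filename).name
    dest.write_bytes(data)
    return dest

# Each user's folder would then be handed to the ingestion step separately,
# keeping one tenant's documents out of another tenant's answers.
saved = save_upload("user_42", "report.pdf", b"%PDF-1.4 demo bytes")
```

Per-user folders are the simplest isolation model; a real deployment would likely also want per-user vector stores so embeddings never mix across tenants.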
- How to run an open source AI model, offline, on my own computer?
- Check out my script which installs privateGPT for Linux!
- Are there any tools or frameworks similar to "langchain" or "llamaindex" but implemented or designed in a language other than Python?
Not really; you will probably need to change the data location and the LLM provider in the example code to get it running. But you don't have to implement that yourself: a couple of projects already do it, like privateGPT. I use it for searching datasheets, got it up and running in a few hours, and I'm pretty happy with it so far.
- Intern tasked to make a "local" version of chatGPT for my work
PrivateGPT can do that.
- I've made privateGPT work for Linux check it out (documents)
llm-chain
What are some alternatives?
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
llama-node - Believe in AI democratization. llama for nodejs backed by llama-rs, llama.cpp and rwkv.cpp, work locally on your laptop CPU. support llama/alpaca/gpt4all/vicuna/rwkv model.
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
libopenai - A Rust client for OpenAI's API
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
openai-client - OpenAI Dive is an unofficial async Rust library that allows you to interact with the OpenAI API.
Voyager - An Open-Ended Embodied Agent with Large Language Models
memex - Super-simple, fully Rust powered "memory" (doc store + semantic search) for LLM projects, semantic search, etc.
llm - An ecosystem of Rust libraries for working with large language models
opentau - Using Large Language Models for Gradual Type Inference
gorilla - Gorilla: An API store for LLMs
async-openai - Rust library for OpenAI