| | quivr | xTuring |
|---|---|---|
| Mentions | 22 | 31 |
| Stars | 32,917 | 2,524 |
| Stars growth (monthly) | 7.7% | 0.9% |
| Activity | 9.9 | 8.4 |
| Latest commit | 1 day ago | about 1 month ago |
| Language | TypeScript | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
quivr
- privateGPT VS quivr - a user suggested alternative
2 projects | 12 Jan 2024
- First 15 Open Source Advent projects
3. Quivr | GitHub | tutorial
- What's the catch with codecanyon?
- Went down the rabbit hole of 100% local RAG, it works but are there better options?
I used Ollama (with Mistral 7B) and Quivr to get a local RAG up and running and it works fine, but was surprised to find there are no easy user-friendly ways to do it. Most other local LLM UIs don't implement this use case (I looked here), even though it is one of the most useful local LLM use-cases I can think of: search and summarize information from sensitive / confidential documents.
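The retrieve-then-generate loop described above can be sketched in a few lines of plain Python. Everything here is a toy illustration: the bag-of-words "embedding" and cosine ranking stand in for Quivr's real vector store, and in an actual setup the assembled prompt would be sent to a local model such as Mistral 7B via Ollama.

```python
# Toy sketch of the RAG loop: embed, retrieve top-k chunks, build a prompt.
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': a term-frequency Counter."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Rank document chunks by similarity to the query, keep the top k."""
    q = embed(query)
    scored = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return scored[:k]

docs = [
    "Quivr stores uploaded PDFs as vector embeddings for search.",
    "Mistral 7B is a 7-billion-parameter open-weight language model.",
    "Plato's Republic discusses justice in the ideal city-state.",
]
question = "which model has 7 billion parameters?"
context = retrieve(question, docs, k=1)
# In a real pipeline this prompt goes to the local LLM (e.g. via Ollama);
# here we only show how the retrieved chunk is spliced in.
prompt = f"Context: {context[0]}\nQuestion: {question}"
```

A real local RAG replaces `embed` with a proper embedding model and stores vectors in a database, but the control flow is exactly this.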
- FLaNK Stack Weekly for 21 August 2023
- Discord Is Not Documentation
In my opinion, LLM-based document search tools such as the open-source Quivr may be better suited for documentation search at startups.
A highly customized Quivr paired with one of the open-source LLMs may provide great semantic search for product documentation.
https://github.com/StanGirard/quivr
- Quivr
- I built an open source website that lets you upload large files such as academic PDFs or books and ask ChatGPT questions based on your custom knowledge base. So far, I've tried it with long ebooks like Plato's Republic, old letters, and random academic PDFs, and it works shockingly well.
Hey, thanks for creating this, I will try it later if I have time. Meanwhile, have you tried any other second-brain apps such as this one, and how do they compare? The one I mentioned was trending on GitHub, so I think it's decent (I've been playing with it since last week or so). But I already starred your repo so I can come back later.
- Quivr – Your Second Brain, Empowered by Generative AI
- Quivr: Chatting with your own docs
xTuring
- I'm developing an open-source AI tool called xTuring, enabling anyone to construct a language model with just 5 lines of code. I'd love to hear your thoughts!
Explore the project on GitHub here.
- LLaMA 2 fine-tuning made easier and faster
If you're curious, I encourage you to:
- Dive deeper with the LLaMA 2 tutorial here.
- Explore the project on GitHub here.
- Connect with our community on Discord here.
- RAG vs. Fine-Tuning
If you want the best performance, you need to do both RAG and fine-tuning very well. There are plenty of resources on fine-tuning, though. I'm one of the contributors to https://github.com/stochasticai/xturing, a project focused on fine-tuning LLMs. You can find help in the Discord channel listed on the GitHub page.
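Parameter-efficient fine-tuning of the kind xTuring offers (e.g. its LoRA recipes) rests on a simple linear-algebra trick that is worth seeing in isolation. The sketch below is a toy numpy illustration of the low-rank adapter idea, not xTuring internals; the shapes and the `alpha / r` scaling follow the usual LoRA convention.

```python
# LoRA idea in miniature: freeze W, train only the small matrices A and B,
# and use W + (alpha / r) * B @ A as the effective weight.
import numpy as np

d_out, d_in, r, alpha = 8, 8, 2, 4  # rank r << d, so B @ A is cheap to train
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init

def adapted_forward(x):
    """Forward pass with the low-rank update merged into the weight."""
    return (W + (alpha / r) * B @ A) @ x

x = rng.normal(size=d_in)
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(adapted_forward(x), W @ x)
# Only r * (d_in + d_out) = 32 adapter parameters train, vs 64 in W itself.
```

The zero-initialized `B` is why fine-tuning can start from the pretrained model's exact behavior; at realistic sizes (d in the thousands, r of 8 or 16) the parameter saving is what makes single-GPU fine-tuning feasible.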
- Build, customize and control your own personal LLMs via xTuring OSS
- Finetuning LLaMA 2 (the base models)?
What tools have you used that achieved great results? … For me, I have tried xTuring and SFTTrainer, and they got me semi-okay results.
- Finetuning using Google Colab (Free Tier)
Code: https://github.com/stochasticai/xTuring/blob/main/examples/llama/llama_lora_int8.py Colab: https://colab.research.google.com/drive/1SQUXq1AMZPSLD4mk3A3swUIc6Y2dclme?usp=sharing
- I would like to try my hand at finetuning some models. What is the best way to start? I have some questions that I'd appreciate your help on.
We are a group of researchers out of Harvard working on an open-source library called xTuring, focused on fine-tuning LLMs: https://github.com/stochasticai/xturing.
- Fine tuning on my tweets
For fine-tuning, I was thinking about using this (low GPU memory footprint): https://github.com/stochasticai/xturing/blob/main/examples/int4_finetuning/README.md
- Colab for finetuning llama models in 4-bit?
I can't speak for QLoRA, as I haven't had a chance to get an implementation working, but I've had success with StochasticAI's xTuring. It's by far the most streamlined method of finetuning I've come across, and they offer int8 and int4 finetuning (but only for LLaMA-7B).
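The int8 schemes mentioned in these comments build on a simple idea: store weights as 8-bit integers plus a floating-point scale, and dequantize on the fly. The sketch below is a toy per-tensor absmax version; real int8 training (e.g. LLM.int8()) quantizes per row or column and handles outlier features separately.

```python
# Absmax int8 quantization: one float scale per tensor, weights as int8.
import numpy as np

def quantize_int8(w):
    """Map the largest-magnitude weight to +/-127; round the rest."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding error is bounded by half a quantization step per weight:
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```

Storing `q` instead of `w` cuts weight memory to a quarter of float32 (half of float16), which is why 7B-parameter models become finetunable on a single consumer GPU.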
- Just wanna say this.
What are some alternatives?
localGPT - Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
axolotl - Go ahead and axolotl questions
chart-gpt - AI tool to build charts based on text input
FinGPT - FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace.
Flowise - Drag & drop UI to build your customized LLM flow
awesome-totally-open-chatgpt - A list of totally open alternatives to ChatGPT
databerry - The no-code platform for building custom LLM Agents
Meshtasticator - Discrete-event and interactive simulator for Meshtastic.
vault-ai - OP Vault ChatGPT: Give ChatGPT long-term memory using the OP Stack (OpenAI + Pinecone Vector Database). Upload your own custom knowledge base files (PDF, txt, epub, etc) using a simple React frontend.
Zicklein - Finetuning instruct-LLaMA on german datasets.
khoj - Your AI second brain. A copilot to get answers to your questions, whether they be from your own notes or from the internet. Use powerful, online (e.g gpt4) or private, local (e.g mistral) LLMs. Self-host locally or use our web app. Access from Obsidian, Emacs, Desktop app, Web or Whatsapp.
safetensors_util - Utility for Safetensors Files