gpt_index vs khoj

| | gpt_index | khoj |
|---|---|---|
| Mentions | 48 | 50 |
| Stars | 7,332 | 4,858 |
| Growth | - | 4.2% |
| Activity | 9.8 | 9.9 |
| Latest commit | about 1 year ago | 4 days ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpt_index
-
Basic links to get started with Prompt Programming
LLAMA Index Github repository
-
Leak: Meta's GPT challenger LLaMA available as a torrent
Contributions are also slowly starting to come in (LlamaIndex) https://github.com/jerryjliu/gpt_index
-
Large language models are having their Stable Diffusion moment
This is exactly what LlamaIndex is meant to solve!
A set of data structures to augment LLMs with your data: https://github.com/jerryjliu/gpt_index
-
ChatGPT's API Is So Good and Cheap, It Makes Most Text Generating AI Obsolete
This is what we've designed LlamaIndex for! https://github.com/jerryjliu/gpt_index. Designed to help you "index" over a large doc corpus in different ways for use with LLM prompts.
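The "index over a large doc corpus for use with LLM prompts" idea can be sketched in plain Python. This is a toy illustration of the retrieve-then-prompt pattern, not LlamaIndex's actual API: the "index" here is a simple term-to-document map, where a real system would use embeddings and a vector store.

```python
from collections import defaultdict

def build_index(docs):
    """Map each lowercase term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def retrieve(index, docs, query, top_k=1):
    """Score documents by how many query terms they contain."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id in index.get(term, ()):
            scores[doc_id] += 1
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [docs[i] for i in ranked[:top_k]]

def make_prompt(context_docs, question):
    """Stuff the retrieved context into the LLM prompt."""
    context = "\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

docs = [
    "Khoj is an open source AI copilot for your notes.",
    "LlamaIndex connects LLMs with external data.",
]
prompt = make_prompt(
    retrieve(build_index(docs), docs, "external data LLMs"),
    "Which tool connects LLMs to data?",
)
```

Only the retrieved context reaches the prompt, which is what lets a corpus far larger than the model's context window stay usable.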
-
Is there a way I can have ChatGPT look at a document of mine?
https://github.com/jerryjliu/gpt_index might be close to what you need.
-
AI is making it easier to create more noise, when all I want is good search
I would start with https://gpt-index.readthedocs.io/en/latest/ and https://langchain.readthedocs.io/en/latest/
- GitHub - jerryjliu/gpt_index: LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.
-
Using OpenAI with self hosted knowledge database
People have been doing this with https://github.com/jerryjliu/gpt_index
-
Long form content
Here is a link to the repository. Take a look at the overview section of the readme. https://github.com/jerryjliu/gpt_index
-
LLaMA: A foundational, 65B-parameter large language model
(creator of gpt index / llamaindex here https://github.com/jerryjliu/gpt_index)
Funny that we had just rebranded our tool from GPT Index to LlamaIndex about a week ago to avoid potential trademark issues with OpenAI, and turns out Meta has similar ideas around LLM+llama puns :). Must mean the name is good though!
Also very excited to try plugging the LLaMA model into LlamaIndex, will report the results.
khoj
-
Show HN: I made an app to use local AI as daily driver
There are already several RAG chat open source solutions available. Two that immediately come to mind are:
Danswer
https://github.com/danswer-ai/danswer
Khoj
https://github.com/khoj-ai/khoj
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
I'm a fan of Khoj. Been using it for months. https://github.com/khoj-ai/khoj
-
You probably don’t need to fine-tune LLMs
https://github.com/khoj-ai/khoj
This is the easiest I found, on here too.
-
Show HN: Khoj – Chat Offline with Your Second Brain Using Llama 2
Thanks for the feedback. Does your machine have a GPU? 32GB CPU RAM should be enough but GPU speeds up response time.
We have fixes for the seg fault[1] and improvement to the query speed[2] that should be released by end of day today[3].
Update khoj to version 0.10.1 with `pip install --upgrade khoj-assistant` to see if that improves your experience.
Memory utilization doesn't scale as quickly with the number of documents/pages/entries, and that count doesn't affect search or chat response time as much
[1]: The seg fault would occur when folks sent multiple chat queries at the same time. A lock and some UX improvements fixed that
[2]: The query time improvements are done by increasing batch size, to trade-off increased memory utilization for more speed
[3]: The relevant pull request for reference: https://github.com/khoj-ai/khoj/pull/393
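The lock-based fix described in [1] can be sketched roughly as follows. This is a minimal illustration with hypothetical names, not Khoj's actual code (which lives in the linked pull request): a single lock serializes chat queries so concurrent requests can't touch shared, non-thread-safe model state at the same time.

```python
import threading

chat_lock = threading.Lock()
results = []

class DummyModel:
    """Stand-in for a non-thread-safe local LLM."""
    def __init__(self):
        self.busy = False

    def generate(self, query):
        # Concurrent entry here is the kind of unsynchronized access
        # that can crash a native inference backend.
        assert not self.busy, "concurrent access to model state"
        self.busy = True
        answer = f"answer to: {query}"
        self.busy = False
        return answer

model = DummyModel()

def answer_chat_query(query):
    # Only one query runs inference at a time; the rest wait on the lock.
    with chat_lock:
        results.append(model.generate(query))

threads = [threading.Thread(target=answer_chat_query, args=(f"q{i}",))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The trade-off is throughput: queries queue up behind the lock instead of running in parallel, which is why the batch-size change in [2] matters for keeping individual queries fast.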
-
A Review: Using Llama 2 to Chat with Notes on Consumer Hardware
We recently integrated Llama 2 into Khoj. I wanted to share a short real-world evaluation of using Llama 2 for the chat-with-docs use case and hear which models have worked best for you all. The standard benchmarks (ARC, HellaSwag, MMLU, etc.) are not tuned for evaluating this use case.
- FLaNK Stack Weekly for 17 July 2023
-
An open source AI search + chat assistant for your Notion workspace
Self-host your Notion assistant using the instructions here. You'll need Python >= 3.8 to get started.
-
When will we get JARVIS?
Here's an early example: https://github.com/khoj-ai/khoj
What are some alternatives?
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
obsidian-smart-connections - Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
llama - Inference code for Llama models
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
awesome-chatgpt-prompts - This repo includes ChatGPT prompt curation to use ChatGPT better.
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama-cpp-python - Python bindings for llama.cpp
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
obsidian-ava - Quickly format your notes with ChatGPT in Obsidian
finetuner - 🎯 Task-oriented embedding tuning for BERT, CLIP, etc.
logseq-plugin-gpt3-openai - A plugin for GPT-3 AI assisted note taking in Logseq