| | khoj | jan |
|---|---|---|
| Mentions | 50 | 14 |
| Stars | 4,858 | 17,643 |
| Growth | 2.8% | 17.8% |
| Activity | 9.9 | 10.0 |
| Last commit | about 9 hours ago | 4 days ago |
| Language | Python | TypeScript |
| License | GNU Affero General Public License v3.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
khoj
-
Show HN: I made an app to use local AI as daily driver
There are already several open source RAG chat solutions available. Two that immediately come to mind are:
- Danswer: https://github.com/danswer-ai/danswer
- Khoj: https://github.com/khoj-ai/khoj
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
I'm a fan of Khoj. Been using it for months. https://github.com/khoj-ai/khoj
-
You probably don’t need to fine-tune LLMs
https://github.com/khoj-ai/khoj
This is the easiest one I've found; I saw it on here too.
-
Show HN: Khoj – Chat Offline with Your Second Brain Using Llama 2
Thanks for the feedback. Does your machine have a GPU? 32GB CPU RAM should be enough but GPU speeds up response time.
We have fixes for the seg fault[1] and an improvement to the query speed[2] that should be released by end of day today[3].
Update khoj to version 0.10.1 with `pip install --upgrade khoj-assistant` to see if that improves your experience.
The number of documents/pages/entries doesn't scale memory utilization as quickly and doesn't affect search or chat response time as much.
[1]: The seg fault would occur when folks sent multiple chat queries at the same time. A lock and some UX improvements fixed that.
[2]: The query time improvements come from increasing the batch size, trading increased memory utilization for more speed.
[3]: The relevant pull request for reference: https://github.com/khoj-ai/khoj/pull/393
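The batch-size trade-off described in [2] can be sketched as follows. This is a toy illustration, not Khoj's actual code: `encode`, the embedding dimension, and the batch sizes are all hypothetical. The point is that a larger batch means fewer model invocations (less per-call overhead, so faster) at the cost of holding more vectors in memory at once.

```python
import numpy as np

def encode(texts):
    # Stand-in for a real embedding model: each call pays a fixed
    # invocation overhead, so fewer, larger calls are faster overall.
    return np.random.rand(len(texts), 384)

def embed_corpus(texts, batch_size):
    """Embed texts in chunks of batch_size.

    Larger batch_size -> fewer encode() calls (speed), but each call
    must hold batch_size * 384 floats in memory at once (cost).
    """
    chunks = [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]
    return np.vstack([encode(chunk) for chunk in chunks])

docs = [f"note {i}" for i in range(1000)]
small = embed_corpus(docs, batch_size=32)   # more calls, lower peak memory
large = embed_corpus(docs, batch_size=256)  # fewer calls, higher peak memory
```

Either way the final embedding matrix is identical in shape; only peak memory and wall-clock time differ.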
-
A Review: Using Llama 2 to Chat with Notes on Consumer Hardware
We recently integrated Llama 2 into Khoj. I wanted to share a short real-world evaluation of using Llama 2 for the chat-with-docs use case and hear which models have worked best for you all. The standard benchmarks (ARC, HellaSwag, MMLU, etc.) are not tuned for evaluating this use case.
- FLaNK Stack Weekly for 17 July 2023
-
An open source AI search + chat assistant for your Notion workspace
Self-host your Notion assistant using the instructions here. You'll need Python >= 3.8 to get started.
-
When will we get JARVIS?
Here's an early example: https://github.com/khoj-ai/khoj
jan
-
AI enthusiasm - episode #2🚀
Jan.ai is a 100% local alternative to ChatGPT: you can download LLMs and run them directly from within the application, or even prompt them and retrieve their responses via API.
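Jan's local server speaks the OpenAI chat-completions wire format, so any OpenAI-style client can talk to it. A minimal sketch using only the standard library; the port (1337) and the model id are assumptions here, so check your own Jan settings:

```python
import json
import urllib.request

# Jan typically exposes an OpenAI-compatible endpoint on localhost.
# The port and model id below are assumptions -- adjust to your setup.
JAN_URL = "http://localhost:1337/v1/chat/completions"

def build_request(prompt, model="llama2-7b-chat"):
    """Build an OpenAI-style chat-completions request for Jan's local server."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        JAN_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def ask(prompt):
    # Only works while the Jan desktop app is running its local server.
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires Jan running): print(ask("Say hello in one word."))
```

Because the wire format matches OpenAI's, the same client code can be pointed at either the local server or the hosted API by swapping the base URL.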
- Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?
-
Show HN: I made an app to use local AI as daily driver
It would be cool to have the option to use the OpenAI API as well in the same interface. http://jan.ai does this, so that's what I'm using at the moment.
- Jan – Bringing AI to Your Desktop
- FLaNK 15 Jan 2024
-
Why the M2 is more advanced than it seemed
Was it this? I haven’t tried it yet but it does look nice.
https://jan.ai/
- Jan is an open source alternative to ChatGPT that runs 100% offline
- Open-Source ChatGPT Alternative Jan
- Run LLMs Locally with an OpenAI API
- AI on my desktop—what's yours?
What are some alternatives?
obsidian-smart-connections - Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
unstructured - Open source libraries and APIs to build custom preprocessing pipelines for labeling, training, or production machine learning pipelines.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
chainlit - Build Conversational AI in minutes ⚡️
qdrant - High-performance, massive-scale vector database for the next generation of AI. Also available in the cloud at https://cloud.qdrant.io/
FLaNK-VectorDB - NiFi and Vector Databases
llama-cpp-python - Python bindings for llama.cpp
modelfusion-llamacpp-nextjs-starter - Starter examples for using Next.js and the Vercel AI SDK with Llama.cpp and ModelFusion.
obsidian-ava - Quickly format your notes with ChatGPT in Obsidian
obsidian-local-llm - Obsidian Local LLM is a plugin for Obsidian that provides access to a powerful neural network, allowing users to generate text in a wide range of styles and formats using a local LLM.
logseq-plugin-gpt3-openai - A plugin for GPT-3 AI assisted note taking in Logseq
VOLlama - An accessible chat client for Ollama