| | khoj | silicon |
|---|---|---|
| Mentions | 50 | 2 |
| Stars | 4,858 | 125 |
| Growth | 2.8% | - |
| Activity | 9.9 | 2.2 |
| Last commit | about 18 hours ago | 2 months ago |
| Language | Python | TypeScript |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
khoj
-
Show HN: I made an app to use local AI as daily driver
There are already several open source RAG chat solutions available. Two that immediately come to mind are:
Danswer
https://github.com/danswer-ai/danswer
Khoj
https://github.com/khoj-ai/khoj
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
I'm a fan of Khoj. Been using it for months. https://github.com/khoj-ai/khoj
-
You probably don’t need to fine-tune LLMs
https://github.com/khoj-ai/khoj
This is the easiest one I've found; it has been mentioned on here too.
-
Show HN: Khoj – Chat Offline with Your Second Brain Using Llama 2
Thanks for the feedback. Does your machine have a GPU? 32GB CPU RAM should be enough but GPU speeds up response time.
We have fixes for the seg fault[1] and improvements to the query speed[2] that should be released by end of day today[3].
Update khoj to version 0.10.1 with `pip install --upgrade khoj-assistant` to see if that improves your experience.
The number of documents/pages/entries doesn't scale memory utilization as quickly, and it doesn't affect search or chat response times as much.
[1]: The seg fault would occur when folks sent multiple chat queries at the same time. A lock and some UX improvements fixed that.
[2]: The query time improvements come from increasing the batch size, trading increased memory utilization for more speed.
[3]: The relevant pull request for reference: https://github.com/khoj-ai/khoj/pull/393
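The concurrency fix described in [1] boils down to serializing access to the model with a lock. Here is a minimal Python sketch of that pattern; the `LocalChatModel` class and `chat` function are hypothetical stand-ins, not Khoj's actual code:

```python
import threading

# Hypothetical stand-in for a local LLM that is not safe
# to call from multiple threads at once.
class LocalChatModel:
    def generate(self, prompt: str) -> str:
        return f"response to: {prompt}"

model = LocalChatModel()
# A single lock serializes inference, so two simultaneous
# chat requests can no longer race and crash the process.
model_lock = threading.Lock()

def chat(prompt: str) -> str:
    with model_lock:  # only one query runs inference at a time
        return model.generate(prompt)

# Simulate several clients sending queries at the same time.
threads = [threading.Thread(target=chat, args=(f"q{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Serializing requests costs throughput, but for a single-user local assistant it is often an acceptable trade for stability.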
-
A Review: Using Llama 2 to Chat with Notes on Consumer Hardware
We recently integrated Llama 2 into Khoj. I wanted to share a short real-world evaluation of using Llama 2 for chat-with-docs use cases and to hear which models have worked best for you all. The standard benchmarks (ARC, HellaSwag, MMLU etc.) are not tuned for evaluating this
-
FLaNK Stack Weekly for 17 July 2023
-
An open source AI search + chat assistant for your Notion workspace
Self-host your Notion assistant using the instructions here. You'll need Python >= 3.8 to get started.
-
When will we get JARVIS?
Here's an early example: https://github.com/khoj-ai/khoj
silicon
-
Ask HN: AI for Personal Notes?
As of 2022, https://get.mem.ai/mem-x has one of the best AI-for-personal-notes integrations. I'm sure 2023 will be fruitful. I'm already seeing some Obsidian/Markdown experiments.
> making connections between notes / related notes in its context
mem -> similar mems
obsidian -> https://github.com/brianpetro/obsidian-smart-connections | https://github.com/deepfates/silicon
> asking questions/searching
mem -> search is NLP/AI by default
Markdown -> https://github.com/debanjum/khoj
Obsidian -> https://twitter.com/Sarah_A_Bentley/status/16110695760993362...
> new (summary) notes based on (many) old notes
There are a lot of summarizers on the web. They work great on whole articles. The problem is, how do you summarize hundreds (or more) of independent/related smaller notes?
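One common answer to that question is hierarchical (map-reduce) summarization: summarize the notes in small batches, then summarize the batch summaries, recursing until one summary remains. A minimal Python sketch of the control flow, where `summarize` is a placeholder for a real LLM call (here it just truncates, so the example is self-contained):

```python
# Placeholder for an LLM summarization call; it keeps only the
# first 20 words so the sketch runs without any model.
def summarize(texts: list[str]) -> str:
    joined = " ".join(texts)
    return " ".join(joined.split()[:20])

def summarize_notes(notes: list[str], batch_size: int = 8) -> str:
    # Map step: summarize each small batch of notes.
    summaries = [
        summarize(notes[i:i + batch_size])
        for i in range(0, len(notes), batch_size)
    ]
    # Reduce step: recurse until a single summary remains.
    if len(summaries) == 1:
        return summaries[0]
    return summarize_notes(summaries, batch_size)

# Hundreds of small notes collapse into one summary.
notes = [f"note {i}: some short observation" for i in range(300)]
print(summarize_notes(notes))
```

The batch size bounds how much text each summarization call sees, which is what makes this work for many small notes where a single-pass summarizer would overflow its context window.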
-
I made a plugin that finds related files in your vault with AI
It's not in the Community Plugins repo yet, but you can install it manually from Github. You'll need your own OpenAI key.
What are some alternatives?
obsidian-smart-connections - Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
logseq-plugin-gpt3-openai - A plugin for GPT-3 AI assisted note taking in Logseq
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
energetic-ai - EnergeticAI is TensorFlow.js, optimized for serverless environments, with fast cold-start, small module size, and pre-trained models.
llama-cpp-python - Python bindings for llama.cpp
ai-template - Mercury - Train your own custom GPT. Chat with any file, or website.
obsidian-ava - Quickly format your notes with ChatGPT in Obsidian
ai-chatbot - A full-featured, hackable Next.js AI chatbot built by Vercel
obsidian-weaver - Weaver is a useful Obsidian plugin that integrates ChatGPT/GPT-3 into your note-taking workflow. This plugin makes it easy to access AI-generated suggestions and insights within Obsidian, helping you improve your writing and brainstorming process.