| | khoj | JARVIS |
|---|---|---|
| Mentions | 50 | 52 |
| Stars | 4,858 | 23,054 |
| Stars growth (month over month) | 2.8% | 0.7% |
| Activity | 9.9 | 7.2 |
| Latest commit | about 10 hours ago | 9 days ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
khoj
-
Show HN: I made an app to use local AI as daily driver
There are already several RAG chat open source solutions available. Two that immediately come to mind are:
Danswer
https://github.com/danswer-ai/danswer
Khoj
https://github.com/khoj-ai/khoj
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
I'm a fan of Khoj. Been using it for months. https://github.com/khoj-ai/khoj
-
You probably don’t need to fine-tune LLMs
https://github.com/khoj-ai/khoj
This is the easiest one I've found; it's come up on here too.
-
Show HN: Khoj – Chat Offline with Your Second Brain Using Llama 2
Thanks for the feedback. Does your machine have a GPU? 32GB CPU RAM should be enough but GPU speeds up response time.
We have fixes for the seg fault[1] and improvements to query speed[2] that should be released by end of day today[3].
Update khoj to version 0.10.1 with pip install --upgrade khoj-assistant to see if that improves your experience.
The number of documents/pages/entries doesn't scale memory utilization as quickly, and it doesn't affect search or chat response time as much.
[1]: The seg fault would occur when folks sent multiple chat queries at the same time. A lock and some UX improvements fixed that.
[2]: The query time improvements come from increasing the batch size, trading off increased memory utilization for more speed.
[3]: The relevant pull request for reference: https://github.com/khoj-ai/khoj/pull/393
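The locking fix described in [1] amounts to serializing concurrent chat queries behind a single lock. A minimal sketch of that pattern (hypothetical names, not Khoj's actual code):

```python
import threading

# A single lock serializes chat queries: only one runs against the
# model at a time, so concurrent requests queue up instead of racing.
chat_lock = threading.Lock()

def answer_query(query: str) -> str:
    # Stand-in for the real model call.
    return f"response to: {query}"

def handle_chat(query: str) -> str:
    with chat_lock:  # concurrent callers block here until the lock frees
        return answer_query(query)

# Simulate two chat queries arriving at the same time.
results = []
threads = [
    threading.Thread(target=lambda q=q: results.append(handle_chat(q)))
    for q in ("what is khoj?", "summarize my notes")
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The lock trades throughput for safety: queries are answered one at a time, which is why the batch-size change in [2] was needed to recover speed.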
-
A Review: Using Llama 2 to Chat with Notes on Consumer Hardware
We recently integrated Llama 2 into Khoj. I wanted to share a short real-world evaluation of using Llama 2 for the chat-with-docs use case and to hear which models have worked best for you all. The standard benchmarks (ARC, HellaSwag, MMLU etc.) are not tuned for evaluating this use case.
- FLaNK Stack Weekly for 17 July 2023
-
An open source AI search + chat assistant for your Notion workspace
Self-host your Notion assistant using the instructions here. You'll need Python >= 3.8 to get started.
-
When will we get JARVIS?
Here's an early example: https://github.com/khoj-ai/khoj
JARVIS
- FLaNK Stack 26 February 2024
-
Overview: AI Assembly Architectures
Jarvis: github.com/microsoft/JARVIS
-
When will we get JARVIS?
You can build it yourself now. https://github.com/microsoft/JARVIS
- How to build the Geth (networked intelligence, decentralized AGI)
-
Off-topic: What NVIDIA GPU do I need to run privateGPT or Alpaca-Lora for code translations, debugging, unit tests, etc?
https://github.com/microsoft/JARVIS (when ready; it says >=24GB VRAM)
-
Apple announces Apple Silicon Mac Pro powered by M2 Ultra
Can be. There are projects that run fully locally, like Microsoft's JARVIS. https://github.com/microsoft/JARVIS
-
April 2023
JARVIS, a system to connect LLMs with ML community (https://github.com/microsoft/JARVIS)
- Nvidia's GH200 AI supercomputers could build 'giant' AI models more powerful than GPT-4
-
A Lightweight HuggingGPT Implementation w/ Langchain + Thoughts on Why JARVIS Fails to Deliver
HuggingGPT is a clever idea to boost the capabilities of LLM Agents, and enable them to solve “complicated AI tasks with different domains and modalities”. In short, it uses ChatGPT to plan tasks, select models from Hugging Face (HF), format inputs, execute each subtask via the HF Inference API, and summarise the results. JARVIS tries to generalise this idea, and create a framework to “connect LLMs with the ML community”, which Microsoft Research claims “paves a new way towards advanced artificial intelligence”.
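The plan → select → execute → summarise loop described above can be sketched roughly as follows. All functions here are stubs standing in for the ChatGPT planner, the model-selection step, and the HF Inference API; the names and return shapes are hypothetical:

```python
# Rough sketch of the HuggingGPT control loop: an LLM plans subtasks,
# a Hugging Face model is selected per subtask, each subtask runs via
# an inference API, and the results are summarised by the LLM.

def plan_tasks(request: str) -> list:
    # Stand-in for the ChatGPT task-planning step.
    return [{"task": "image-classification", "input": request}]

def select_model(task: str) -> str:
    # Stand-in for picking a Hugging Face model per subtask.
    return {"image-classification": "google/vit-base-patch16-224"}.get(task, "gpt2")

def execute(model: str, task_input: str) -> str:
    # Stand-in for a call to the HF Inference API.
    return f"[{model}] processed {task_input!r}"

def summarise(request: str, outputs: list) -> str:
    # Stand-in for the final LLM summarisation step.
    return f"For {request!r}: " + "; ".join(outputs)

def hugginggpt(request: str) -> str:
    subtasks = plan_tasks(request)
    outputs = [execute(select_model(t["task"]), t["input"]) for t in subtasks]
    return summarise(request, outputs)
```

The interesting design choice is that the planner's output, not the user's request, drives model selection; the critique below is essentially about how brittle that planning and selection step is in practice.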
- Edit videos through intuitive ChatGPT conversations
What are some alternatives?
obsidian-smart-connections - Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
babyagi
llama-cpp-python - Python bindings for llama.cpp
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/AutoGPT]
obsidian-ava - Quickly format your notes with ChatGPT in Obsidian
visual-chatgpt - Official repo for the paper: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models [Moved to: https://github.com/microsoft/TaskMatrix]
logseq-plugin-gpt3-openai - A plugin for GPT-3 AI assisted note taking in Logseq
dalai - The simplest way to run LLaMA on your local machine