| | khoj | llamafile |
|---|---|---|
| Mentions | 50 | 34 |
| Stars | 4,858 | 14,839 |
| Stars growth (month over month) | 2.8% | 22.1% |
| Activity | 9.9 | 9.6 |
| Latest commit | about 14 hours ago | 1 day ago |
| Language | Python | C++ |
| License | GNU Affero General Public License v3.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
khoj
-
Show HN: I made an app to use local AI as daily driver
There are already several RAG chat open source solutions available. Two that immediately come to mind are:
Danswer
https://github.com/danswer-ai/danswer
Khoj
https://github.com/khoj-ai/khoj
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
I'm a fan of Khoj. Been using it for months. https://github.com/khoj-ai/khoj
-
You probably don’t need to fine-tune LLMs
https://github.com/khoj-ai/khoj
This is the easiest one I've found; I came across it on here too.
-
Show HN: Khoj – Chat Offline with Your Second Brain Using Llama 2
Thanks for the feedback. Does your machine have a GPU? 32 GB of CPU RAM should be enough, but a GPU speeds up response time.
We have fixes for the seg fault[1] and improvements to the query speed[2] that should be released by end of day today[3].
Update Khoj to version 0.10.1 with `pip install --upgrade khoj-assistant` to see if that improves your experience.
The number of documents/pages/entries doesn't scale memory utilization as quickly, and it doesn't affect search or chat response time as much.
[1]: The seg fault would occur when folks sent multiple chat queries at the same time. A lock and some UX improvements fixed that (a sketch of the idea follows below).
[2]: The query-time improvements come from increasing the batch size, trading higher memory utilization for more speed.
[3]: The relevant pull request for reference: https://github.com/khoj-ai/khoj/pull/393
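For illustration only, here is a minimal sketch of the locking approach described in [1]. The function names are hypothetical; the real change is in the linked pull request.

```python
import threading

# Hypothetical sketch of the fix in [1]: serialize chat queries so that
# concurrent requests cannot race on shared model state and seg-fault.
chat_lock = threading.Lock()

def answer_chat_query(query: str) -> str:
    # Only one query runs inference at a time; other requests wait here.
    with chat_lock:
        return run_model_inference(query)

def run_model_inference(query: str) -> str:
    # Placeholder for the actual model call in Khoj.
    return f"response to: {query}"
```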
-
A Review: Using Llama 2 to Chat with Notes on Consumer Hardware
We recently integrated Llama 2 into Khoj. I wanted to share a short real-world evaluation of using Llama 2 for the chat-with-docs use case and hear which models have worked best for you all. The standard benchmarks (ARC, HellaSwag, MMLU, etc.) are not tuned for evaluating this use case.
- FLaNK Stack Weekly for 17 July 2023
-
An open source AI search + chat assistant for your Notion workspace
Self-host your Notion assistant using the instructions here. You'll need Python >= 3.8 to get started.
-
When will we get JARVIS?
Here's an early example: https://github.com/khoj-ai/khoj
llamafile
- llamafile v0.8
-
Mistral AI Launches New 8x22B MoE Model
I think the llamafile[0] system works the best. The binary works on the command line or launches a mini web server. Llamafile offers builds of Mixtral-8x7B-Instruct, so presumably they may package this one up as well (potentially in a quantized format).
You would have to confirm with someone deeper in the ecosystem, but I think you should be able to run this new model as-is against a llamafile?
[0] https://github.com/Mozilla-Ocho/llamafile
-
Apple Explores Home Robotics as Potential 'Next Big Thing'
Thermostats: https://www.sinopetech.com/en/products/thermostat/
I haven't tried running a local speech-to-text engine feeding an LLM to control Home Assistant. Maybe someone is working on this already?
STT: https://github.com/SYSTRAN/faster-whisper
LLM: https://github.com/Mozilla-Ocho/llamafile/releases
LLM: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-D...
It would take some tweaking to get the voice commands working correctly.
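For the speech-to-text half of such a pipeline, here is a minimal sketch using faster-whisper. The model size, audio file name, and the idea of forwarding the transcript to an LLM are all assumptions for illustration.

```python
from faster_whisper import WhisperModel

# Load a small Whisper model on CPU; "small" and int8 are assumptions
# chosen to keep this runnable on modest hardware.
model = WhisperModel("small", device="cpu", compute_type="int8")

# Transcribe a recorded voice command (hypothetical file name).
segments, info = model.transcribe("command.wav")
command_text = " ".join(segment.text.strip() for segment in segments)

# The transcript would then be passed to a local LLM (e.g. via llamafile)
# to decide which Home Assistant action to trigger.
print(command_text)
```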
-
LLaMA Now Goes Faster on CPUs
While I did not succeed in making the matmul code from https://github.com/Mozilla-Ocho/llamafile/blob/main/llamafil... work in isolation, I compared eigen, openblas, and mkl: https://gist.github.com/Dobiasd/e664c681c4a7933ef5d2df7caa87...
In this (very primitive!) benchmark, MKL was a bit better than Eigen (~10%) on my machine (i5-6600).
Since the article https://justine.lol/matmul/ compared the new kernels with MKL, we can (by transitivity) compare the new kernels with Eigen this way, at least very roughly for this one use case.
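For context, here is a rough sketch of this kind of matmul micro-benchmark in Python. NumPy delegates to whatever BLAS backend it was built against (OpenBLAS, MKL, ...), so the numbers depend on the backend, much like the gist above; the matrix size and iteration count are arbitrary.

```python
import time
import numpy as np

# Rough matmul micro-benchmark in the spirit of the comparison above.
n = 2048
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

a @ b  # warm-up so first-call overhead doesn't skew the timing

start = time.perf_counter()
for _ in range(10):
    a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3 * 10  # ~2*n^3 FLOPs per matmul, 10 iterations
print(f"{elapsed:.3f}s total, ~{flops / elapsed / 1e9:.1f} GFLOP/s")
```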
-
Llamafile 0.7 Brings AVX-512 Support: 10x Faster Prompt Eval Times for AMD Zen 4
Yes, they're just ZIP files that also happen to be Actually Portable Executables.
https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file...
-
Show HN: I made an app to use local AI as daily driver
have you seen llamafile[0]?
[0] https://github.com/Mozilla-Ocho/llamafile
- FLaNK Stack 26 February 2024
-
Gemma.cpp: lightweight, standalone C++ inference engine for Gemma models
llama.cpp has integrated Gemma support, so you can use llamafile for this. It is a standalone executable that is portable across most popular OSes.
https://github.com/Mozilla-Ocho/llamafile/releases
So, download the executable from the releases page under Assets. You want either just `main` or just `server`. Don't get the huge ones with the model inlined in the file. The executable is about 30 MB in size.
https://github.com/Mozilla-Ocho/llamafile/releases/download/...
-
Ollama releases OpenAI API compatibility
The improvements in ease of use for locally hosting LLMs over the last few months have been amazing. I was ranting about how easy https://github.com/Mozilla-Ocho/llamafile is just a few hours ago [1]. Now I'm torn as to which one to use :)
1: Quite literally hours ago: https://euri.ca/blog/2024-llm-self-hosting-is-easy-now/
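As a quick illustration of that compatibility layer, here is a minimal sketch using the `openai` Python client pointed at a local Ollama server. The model name is an assumption; use whatever you've pulled with `ollama pull`.

```python
from openai import OpenAI

# Point the standard OpenAI client at the local Ollama server
# (11434 is Ollama's default port). The api_key is required by
# the client but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="llama2",  # assumption: any locally pulled model works
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```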
-
Localllm lets you develop gen AI apps on local CPUs
Slightly off topic, here is the best local llama.cpp wrapper I've run into:
https://github.com/Mozilla-Ocho/llamafile
You can download any .gguf model (not just the ones in their examples) and run it locally (as long as you have the RAM for it). I was running 7B models with ease on an old FX-8350 and now 13B models on a 5600X (32 GB RAM on both machines).
This wrapper spins up a local web server with a simple web frontend you can use immediately with no code, but it also exposes an OpenAI-compatible API for dev work and alternative frontends (like SillyTavern); a sketch of calling it follows below.
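As a hedged sketch of that API, here is how you might hit the embedded llama.cpp server's completion endpoint from Python once a llamafile is running on its default port (8080); the prompt and token count are arbitrary.

```python
import requests

# Assumes a llamafile is already running its built-in server on the
# default port; the endpoint mirrors the embedded llama.cpp server API.
resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": "List three uses for a local LLM:", "n_predict": 64},
)
resp.raise_for_status()
print(resp.json()["content"])
```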
What are some alternatives?
obsidian-smart-connections - Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
langchain - 🦜🔗 Build context-aware reasoning applications
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
ollama-webui - ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI) [Moved to: https://github.com/open-webui/open-webui]
llama-cpp-python - Python bindings for llama.cpp
LLaVA - [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
obsidian-ava - Quickly format your notes with ChatGPT in Obsidian
safetensors - Simple, safe way to store and distribute tensors
logseq-plugin-gpt3-openai - A plugin for GPT-3 AI assisted note taking in Logseq
llama.cpp - LLM inference in C/C++