markdown-embeddings-search vs obsidian-copilot

| | markdown-embeddings-search | obsidian-copilot |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 2 | 445 |
| Growth | - | - |
| Activity | 6.0 | 7.3 |
| Latest commit | about 1 month ago | 3 months ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we are tracking.
markdown-embeddings-search
Ask HN: Has Anyone Trained a personal LLM using their personal notes?
Not exactly what you're looking for, but a few months ago I spent a day building a llama-index pipeline against my markdown notes with a really primitive note-crawling implementation, and had surprisingly good results for question answering.
I don't use an org-roam note system, but I've been working on a similar, highly opinionated note system that I'm always making tools for. And I'm always interested in seeing people's ideal note systems.
my crude WIP Obsidian / Markdown note RAG tool: https://github.com/bs7280/markdown-embeddings-search
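For anyone curious, a pipeline like the one described above can be sketched in a few lines. This is a minimal illustration, not the OP's code: it assumes llama-index >= 0.10 with an OPENAI_API_KEY in the environment, and the notes/ path and the question are placeholders.

```python
# Minimal llama-index question-answering pipeline over markdown notes.
# Assumes `pip install llama-index` (>= 0.10) and OPENAI_API_KEY set;
# "notes/" and the question below are placeholders.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Crawl the notes folder recursively, keeping only markdown files.
documents = SimpleDirectoryReader(
    "notes/", required_exts=[".md"], recursive=True
).load_data()

# Embed the notes and build an in-memory vector index.
index = VectorStoreIndex.from_documents(documents)

# Retrieve relevant chunks and answer the question over them.
query_engine = index.as_query_engine()
print(query_engine.query("What have I written about org-roam?"))
```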
obsidian-copilot
Ask HN: Has Anyone Trained a personal LLM using their personal notes?
Hadn't seen your repo yet [1] - adding it to my list right now.
Your blog post is really neat as well - thanks for sharing.
https://github.com/eugeneyan/obsidian-copilot
Obsidian-Copilot: A Prototype Assistant for Writing and Thinking
Um... can someone explain what this actually does?
In the video the user chooses the 'Copilot: Draft' action, and wow, it generates text...
...but the 'draft' action [1] calls `/get_chunks` and then runs `queryLLM` [2], which just invokes https://api.openai.com/v1/chat/completions directly.
So, generating text this way is 100% not interesting or relevant.
What's interesting here is how it's building the prompt to send to the openai-api.
So... can anyone shed some light on what the actual code [3] in `get_chunks()` does, and why you would do a lookup and pass the results to the OpenAI API instead of just the raw text?
The repo says: "You write a section header and the copilot retrieves relevant notes & docs to draft that section for you." As you can see in the linked post [4], this is basically what the OP is trying to implement: you write 'I want X', and the plugin (a bit like Copilot) looks up related documents, crafts a meta-prompt, and passes that prompt to the OpenAI API.
...but it doesn't seem to do that. It seems to ignore your actual prompt, look up related documents by embedding similarity... and then pass those documents in as the prompt?
I'm pretty confused as to why you would want that.
It basically requires that you write your prompt separately beforehand, so you can invoke it magically with a one-line prompt later. Did I misunderstand how this works?
[1] - https://github.com/eugeneyan/obsidian-copilot/blob/bdabdc422...
[2] - https://github.com/eugeneyan/obsidian-copilot/blob/bdabdc422...
[3] - https://github.com/eugeneyan/obsidian-copilot/blob/main/src/...
[4] - https://eugeneyan.com/writing/llm-experiments/#shortcomings-...
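For reference, the retrieve-then-draft flow being puzzled over - embed the section header, look up the most similar note chunks, and wrap them in a drafting prompt for the chat completions endpoint - might look roughly like this. This is a sketch under stated assumptions, not the plugin's actual code: the sample notes, the model choices, and the `retrieve_chunks`/`draft_section` helpers are all hypothetical.

```python
# Sketch of a retrieve-then-draft flow: embedding-similarity lookup
# followed by a "meta-prompt" to the chat completions API.
# Assumes `pip install openai numpy` and OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Toy stand-in for an indexed vault; a real system builds this from your notes.
NOTES = [
    "Org-roam links notes bidirectionally and builds a graph of ideas.",
    "My weekly review template has three sections: wins, blockers, plans.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

NOTE_VECS = embed(NOTES)

def retrieve_chunks(header: str, k: int = 2) -> list[str]:
    # What get_chunks() is described as doing: rank note chunks by
    # cosine similarity to the section header and return the top k.
    q = embed([header])[0]
    sims = NOTE_VECS @ q / (np.linalg.norm(NOTE_VECS, axis=1) * np.linalg.norm(q))
    return [NOTES[i] for i in np.argsort(sims)[::-1][:k]]

def draft_section(header: str) -> str:
    # The "meta-prompt": retrieved notes become the context, and the
    # one-line section header becomes the actual instruction.
    context = "\n\n".join(retrieve_chunks(header))
    prompt = (
        f"Here are some of my notes:\n\n{context}\n\n"
        f"Using them, draft a section titled: {header}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(draft_section("Weekly review"))
```

Seen this way, the tradeoff the thread circles around is visible: the header is only a retrieval query, so the quality of the draft depends entirely on notes you wrote beforehand.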
What are some alternatives?
obsidian-smart-connections - Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
llmware - Providing an enterprise-grade LLM-based development framework, tools, and fine-tuned models.
tonic_validate - Metrics to evaluate the quality of responses of your Retrieval Augmented Generation (RAG) applications.
chroma-langchain
ResuLLMe - Enhance your résumé with Large Language Models
autollm - Ship RAG-based LLM web apps in seconds.