|  | obsidian-copilot | llmware |
|---|---|---|
| Mentions | 5 | 9 |
| Stars | 445 | 3,839 |
| Growth | - | 22.9% |
| Activity | 7.3 | 9.8 |
| Last commit | 3 months ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
obsidian-copilot
- Ask HN: Has Anyone Trained a personal LLM using their personal notes?
I hadn't seen your repo yet [1] - adding it to my list right now.
Your blog post is really neat on top of that - thanks for sharing.
https://github.com/eugeneyan/obsidian-copilot
- Obsidian-Copilot: A Prototype Assistant for Writing and Thinking
Um... can someone explain what this actually does?
In the video the user chooses the 'Copilot: Draft' action, and wow, it generates code...
...but, the 'draft' action [1] calls `/get_chunks` and then runs 'queryLLM' [2] which then just invokes 'https://api.openai.com/v1/chat/completions' directly.
So, generating text this way is 100% not interesting or relevant.
What's interesting here is how it's building the prompt to send to the openai-api.
So... can anyone shed some light on what the actual code [3] in get_chunks() does, and why you would... hm... I guess, do a lookup and pass the results to the openai api, instead of just the raw text?
The repo says: "You write a section header and the copilot retrieves relevant notes & docs to draft that section for you.", and you can see in the linked post [4], this is basically what the OP is trying to implement here; you write 'I want X', and the plugin (a bit like copilot) does a lookup of related documents, crafts a meta-prompt and passes the prompt to the openai api.
...but, it doesn't seem to do that. It seems to ignore your actual prompt, lookup related documents by embedding similarity... and then... pass those documents in as the prompt?
I'm pretty confused as to why you would want that.
It basically requires that you write your prompt separately beforehand, so you can invoke it magically with a one-line prompt later. Did I misunderstand how this works?
[1] - https://github.com/eugeneyan/obsidian-copilot/blob/bdabdc422...
[2] - https://github.com/eugeneyan/obsidian-copilot/blob/bdabdc422...
[3] - https://github.com/eugeneyan/obsidian-copilot/blob/main/src/...
[4] - https://eugeneyan.com/writing/llm-experiments/#shortcomings-...
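The retrieve-then-draft flow the comment is puzzling over can be sketched roughly as follows. The function names (`get_chunks`, `build_prompt`) are illustrative, not the plugin's actual API, and a toy bag-of-words similarity stands in for real embeddings:

```python
# Sketch of retrieval-augmented drafting: rank notes by similarity to the
# section header, then stuff the top hits into the prompt as context.
# A bag-of-words cosine similarity stands in for a real embedding model.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: lowercase term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def get_chunks(header: str, notes: list[str], k: int = 2) -> list[str]:
    # Rank the user's notes by similarity to the section header.
    q = embed(header)
    ranked = sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)
    return ranked[:k]

def build_prompt(header: str, chunks: list[str]) -> str:
    # Retrieved notes become context; the header steers the draft.
    context = "\n---\n".join(chunks)
    return f"Using these notes:\n{context}\n\nDraft a section titled: {header}"

notes = [
    "RAG retrieves documents and stuffs them into the prompt.",
    "My grocery list: eggs, milk, bread.",
    "Retrieval quality depends on the embedding model.",
]
prompt = build_prompt("How RAG works", get_chunks("retrieval RAG documents", notes))
print(prompt)
```

This is why the header matters more than a free-form instruction: the header is the retrieval query, so writing a good section title is what pulls the right notes into context before anything is sent to the completion API.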
llmware
- More Agents Is All You Need: LLMs performance scales with the number of agents
I couldn't agree more. You should check out LLMWare's SLIM agents (https://github.com/llmware-ai/llmware/tree/main/examples/SLI...). It focuses on pretty much exactly this: chaining multiple local LLMs together.
A really good topic that ties in with this is the need for deterministic sampling (I may have the terminology a bit incorrect) depending on what the model is intended for. The LLMWare team did a good two-part video on this here as well (https://www.youtube.com/watch?v=7oMTGhSKuNY)
I think dedicated miniature LLMs are the way forward.
Disclaimer - Not affiliated with them in any way, just think it's a really cool project.
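The chaining pattern the comment describes can be sketched like this. Each "agent" below is a plain function standing in for a call to a small specialized local model; the names and the shared-state design are illustrative, not llmware's actual API:

```python
# Sketch of chaining small specialized models: each agent reads shared
# state, adds its own output, and passes it on to the next step.
from typing import Callable

Agent = Callable[[dict], dict]

def sentiment_agent(state: dict) -> dict:
    # Stand-in for a small classifier model: crude keyword sentiment.
    text = state["text"].lower()
    state["sentiment"] = "negative" if any(w in text for w in ("refund", "broken")) else "positive"
    return state

def routing_agent(state: dict) -> dict:
    # A second specialized step routes based on the first step's output.
    state["queue"] = "support" if state["sentiment"] == "negative" else "sales"
    return state

def run_chain(text: str, agents: list[Agent]) -> dict:
    state = {"text": text}
    for agent in agents:  # each agent extends the shared state
        state = agent(state)
    return state

result = run_chain("The unit arrived broken, I want a refund.", [sentiment_agent, routing_agent])
print(result["queue"])  # → support
```

The appeal of this shape is that each step does one narrow, checkable job, which is also where deterministic (low- or zero-temperature) sampling matters: a classifier feeding a router should give the same answer every time.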
- FLaNK Stack Weekly 19 Feb 2024
- Show HN: LLMWare – Small Specialized Function Calling 1B LLMs for Multi-Step RAG
I've been building upon the LLMWare project - https://github.com/llmware-ai/llmware - for the past 3 months. The ability to run these models locally on standard consumer CPUs, along with the abstraction that lets you chop and change between models and different processes, is really cool.
I think these SLIM models are the start of something powerful for automating internal business processes and enhancing the use case of LLMs. Still kinda blows my mind that this is all running on my 3900X and also runs on a bog standard Hetzner server with no GPU.
- Show HN: LLMWare – Integrated Solution for RAG in Finance and Legal
- Llmware.ai – AI Tools for Financial, Legal and Compliance
- Open Source Advent Fun Wraps Up!
16. LLMWare by Ai Bloks | GitHub | tutorial
- FLaNK Stack Weekly 16 October 2023
- Strategy for PDF data extraction and Display
What are some alternatives?
obsidian-smart-connections - Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
llm-client-sdk - SDK for using LLM