llmware VS obsidian-copilot

Compare llmware vs obsidian-copilot and see how they differ.

                llmware               obsidian-copilot
Mentions        9                     5
Stars           3,173                 440
Growth          6.7%                  -
Activity        9.8                   7.3
Latest commit   7 days ago            2 months ago
Language        Python                Python
License         Apache License 2.0    Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
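One simple way to realize a recency-weighted activity metric like the one described above is exponential decay with a half-life. This is an illustrative assumption, not the site's published formula:

```python
from datetime import datetime, timedelta

def activity_score(commit_dates, now, half_life_days=30.0):
    """Toy recency-weighted activity score: each commit contributes a
    weight that halves every `half_life_days`. Illustrative only."""
    return sum(0.5 ** ((now - d).days / half_life_days)
               for d in commit_dates)

now = datetime(2024, 5, 1)
recent = [now - timedelta(days=d) for d in (1, 3, 7)]
stale = [now - timedelta(days=d) for d in (200, 250, 300)]
# Three commits from the past week score far higher than
# three commits from 6-10 months ago.
assert activity_score(recent, now) > 10 * activity_score(stale, now)
```

Under this scheme a project committed to 7 days ago (like llmware) naturally outscores one last committed to 2 months ago (like obsidian-copilot).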

llmware

Posts with mentions or reviews of llmware. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-06.

obsidian-copilot

Posts with mentions or reviews of obsidian-copilot. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-03.
  • Ask HN: Has Anyone Trained a personal LLM using their personal notes?
    10 projects | news.ycombinator.com | 3 Apr 2024
    hadn't seen your repo yet [1] - adding it to my list right now.

    Your blog post is really neat on top - thanks for sharing

    https://github.com/eugeneyan/obsidian-copilot

  • Obsidian-Copilot: A Prototype Assistant for Writing and Thinking
    1 project | /r/patient_hackernews | 13 Jun 2023
    1 project | /r/hackernews | 13 Jun 2023
    5 projects | news.ycombinator.com | 13 Jun 2023
    Um... can someone explain what this actually does?

    In the video the user chooses the 'Copilot: Draft' action, and wow, it generates code...

    ...but, the 'draft' action [1] calls `/get_chunks` and then runs 'queryLLM' [2] which then just invokes 'https://api.openai.com/v1/chat/completions' directly.

    So, generating text this way is 100% not interesting or relevant.

    What's interesting here is how it's building the prompt to send to the openai-api.

    So... can anyone shed some light on what the actual code [3] in get_chunks() does, and why you would... hm... I guess, do a lookup and pass the results to the openai api, instead of just the raw text?

    The repo says: "You write a section header and the copilot retrieves relevant notes & docs to draft that section for you.", and you can see in the linked post [4], this is basically what the OP is trying to implement here; you write 'I want X', and the plugin (a bit like copilot) does a lookup of related documents, crafts a meta-prompt and passes the prompt to the openai api.

    ...but, it doesn't seem to do that. It seems to ignore your actual prompt, lookup related documents by embedding similarity... and then... pass those documents in as the prompt?

    I'm pretty confused as to why you would want that.

    It basically requires that you write your prompt separately beforehand, so you can invoke it magically with a one-line prompt later. Did I misunderstand how this works?

    [1] - https://github.com/eugeneyan/obsidian-copilot/blob/bdabdc422...

    [2] - https://github.com/eugeneyan/obsidian-copilot/blob/bdabdc422...

    [3] - https://github.com/eugeneyan/obsidian-copilot/blob/main/src/...

    [4] - https://eugeneyan.com/writing/llm-experiments/#shortcomings-...
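    The retrieve-then-prompt flow the commenter is describing (look up note chunks related to the section header by embedding similarity, then pass them as context to the chat-completions endpoint) can be sketched roughly as below. All names here (`embed`, `get_chunks`, `build_prompt`) and the bag-of-words "embedding" are stand-ins for illustration, not the plugin's actual code:

```python
import math
from collections import Counter

def embed(text):
    """Stand-in 'embedding': a bag-of-words count vector.
    A real plugin would call an embedding model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def get_chunks(header, notes, k=2):
    """Rank note chunks by similarity to the section header."""
    q = embed(header)
    return sorted(notes, key=lambda n: -cosine(embed(n), q))[:k]

def build_prompt(header, chunks):
    """Retrieved chunks become the context; the header is the task."""
    context = "\n---\n".join(chunks)
    return (f"Using the notes below, draft a section titled "
            f"'{header}'.\n\nNotes:\n{context}")

notes = [
    "retrieval augmented generation feeds retrieved notes to the model",
    "grocery list: eggs, milk, bread",
    "embedding similarity ranks the retrieval candidates",
]
header = "retrieval augmented generation"
prompt = build_prompt(header, get_chunks(header, notes))
# `prompt` would then be sent to a chat-completions endpoint.
```

    This also answers the commenter's question about why you would pass lookup results rather than raw text: the one-line header is not the content of the prompt, it is the retrieval query, and the retrieved notes supply the substance the model drafts from.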

What are some alternatives?

When comparing llmware and obsidian-copilot you can also consider the following projects:

llm-client-sdk - SDK for using LLMs

obsidian-smart-connections - Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3

pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.

tonic_validate - Metrics to evaluate the quality of responses of your Retrieval Augmented Generation (RAG) applications.

inference - A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.

chroma-langchain

openstatus - 🏓 The open-source synthetic & real user monitoring platform 🏓

ResuLLMe - Enhance your résumé with Large Language Models

megabots - 🤖 State-of-the-art, production ready LLM apps made mega-easy, so you don't have to build them from scratch 🤯 Create a bot, now 🫵

markdown-embeddings-search - Obsidian notes to Pinecone embeddings plus other files in an effort to learn llama_index

SimplyRetrieve - Lightweight chat AI platform featuring custom knowledge, open-source LLMs, prompt-engineering, retrieval analysis. Highly customizable. For Retrieval-Centric & Retrieval-Augmented Generation.

autollm - Ship RAG based LLM web apps in seconds.