Obsidian-copilot Alternatives
Similar projects and alternatives to obsidian-copilot
-
github-orgmode-tests
This is a test project where you can explore how github interprets Org-mode files
-
qdrant
Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
-
obsidian-smart-connections
Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
-
tonic_validate
Metrics to evaluate the quality of responses of your Retrieval Augmented Generation (RAG) applications.
-
til
Personal Wiki of Interesting things I learn every day at the intersection of software, life & stuff a.k.a my second brain 🧠️ (by Bhupesh-V)
-
markdown-embeddings-search
Obsidian notes to Pinecone embeddings, plus other files, in an effort to learn llama_index
obsidian-copilot reviews and mentions
-
Ask HN: Has Anyone Trained a personal LLM using their personal notes?
hadn't seen your repo yet [1] - adding it to my list right now.
Your blog post is really neat on top - thanks for sharing
https://github.com/eugeneyan/obsidian-copilot
-
Obsidian-Copilot: A Prototype Assistant for Writing and Thinking
Um... can someone explain what this actually does?
In the video the user chooses the 'Copilot: Draft' action, and wow, it generates code...
...but, the 'draft' action [1] calls `/get_chunks` and then runs 'queryLLM' [2] which then just invokes 'https://api.openai.com/v1/chat/completions' directly.
So, generating text this way is 100% not interesting or relevant.
What's interesting here is how it's building the prompt to send to the openai-api.
So... can anyone shed some light on what the actual code [3] in get_chunks() does, and why you would... hm... I guess, do a lookup and pass the results to the openai api, instead of just the raw text?
The repo says: "You write a section header and the copilot retrieves relevant notes & docs to draft that section for you.", and you can see in the linked post [4], this is basically what the OP is trying to implement here; you write 'I want X', and the plugin (a bit like copilot) does a lookup of related documents, crafts a meta-prompt and passes the prompt to the openai api.
...but, it doesn't seem to do that. It seems to ignore your actual prompt, lookup related documents by embedding similarity... and then... pass those documents in as the prompt?
I'm pretty confused as to why you would want that.
It basically requires that you write your prompt separately beforehand, so you can invoke it magically with a one-line prompt later. Did I misunderstand how this works?
[1] - https://github.com/eugeneyan/obsidian-copilot/blob/bdabdc422...
[2] - https://github.com/eugeneyan/obsidian-copilot/blob/bdabdc422...
[3] - https://github.com/eugeneyan/obsidian-copilot/blob/main/src/...
[4] - https://eugeneyan.com/writing/llm-experiments/#shortcomings-...
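The flow the commenter is describing — look up related note chunks by similarity to the section header, pack them into a prompt, and send that to the OpenAI chat completions endpoint — can be sketched roughly as below. The function names, the word-overlap scoring (a stand-in for real embedding similarity), and the prompt wording are all illustrative assumptions, not the plugin's actual code:

```python
def get_chunks(header: str, index: dict[str, str]) -> list[str]:
    """Return note chunks ranked by relevance to the section header.
    The real plugin ranks by embedding/keyword similarity; simple word
    overlap stands in for that here."""
    words = set(header.lower().split())
    scored = [(len(words & set(text.lower().split())), text)
              for text in index.values()]
    # Highest-overlap chunks first; drop chunks with no overlap at all.
    return [text for score, text in sorted(scored, reverse=True) if score > 0]

def build_prompt(header: str, chunks: list[str]) -> str:
    """Pack the retrieved chunks into the message sent to the LLM."""
    context = "\n\n".join(chunks)
    return (f"Context from my notes:\n{context}\n\n"
            f"Draft a section titled '{header}' using the context above.")

# Toy note index standing in for an Obsidian vault.
notes = {
    "rag.md": "Retrieval augmented generation grounds LLM output in documents",
    "cooking.md": "Slow roasting brings out flavor in root vegetables",
}

header = "retrieval augmented generation"
prompt = build_prompt(header, get_chunks(header, notes))
# The draft step would then POST this prompt to
# https://api.openai.com/v1/chat/completions (omitted: requires an API key).
```

On this reading, the "one-line prompt" objection follows directly: the section header is only a retrieval query, so everything substantive has to already exist in the notes for the draft to say anything useful.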
Stats
eugeneyan/obsidian-copilot is an open source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of obsidian-copilot is Python.