Here's someone else getting something similar.
https://github.com/jart/emacs-copilot/issues/2
Yeah, and there's already a well-known (at least I already knew about it) package started in 2022 called "copilot" for Emacs that is actually a client for GitHub Copilot: https://github.com/zerolfx/copilot.el
Given the lack of namespacing in Elisp (or, rather, the informal namespacing conventions by which these two packages collide), it's unfortunate that this package chose the same name.
Also worth knowing about in this space is ellama (https://github.com/s-kostyaev/ellama), which uses the llm package (https://github.com/ahyatt/llm#ollama) to talk to ollama. While ellama doesn't currently support talking to ollama over the network, it doesn't look like that would be a hard thing to add.
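Network support is plausible to add precisely because ollama exposes a plain HTTP API (port 11434 by default), so a client mostly just needs a configurable base URL. A minimal Python sketch of that idea, assuming a hypothetical remote host name and the mistral model:

```python
import json
import urllib.request

# Hypothetical remote host running `ollama serve`; the host name and
# model below are assumptions for illustration, not real endpoints.
OLLAMA_HOST = "http://gpu-box.local:11434"

def build_generate_request(prompt, model="mistral", host=OLLAMA_HOST):
    """Build a POST request for ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate(prompt, model="mistral", host=OLLAMA_HOST):
    """Send the prompt to ollama and return the generated text."""
    req = build_generate_request(prompt, model, host)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Since only the base URL changes between localhost and a remote box, the Elisp side would be a similar one-variable change.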
Also worth checking out for more general use of LLMs in emacs: https://github.com/karthink/gptel
There's also https://github.com/David-Kunz/gen.nvim which works locally with ollama and eg. mistral 7B.
Any experience/comparison between them?
I don’t have experience with gp.nvim, but I liked David Kunz's gen.nvim quite a bit. I ended up forking it into a little pet project so that I could change it a bit more into what I wanted.
I love being able to use ollama, but wanted to be able to switch to GPT-4 if I needed. I don’t really think automatic replacement is very useful because of how often I need to iterate on a response. For me, a better replacement method is to visually highlight a region in the buffer and hit enter. That way you can iterate with the LLM if needed.
Also, a bit finer control over settings like the system message, temperature, etc. is nice to have.
https://github.com/dleemiller/nopilot.nvim