Hi HN,
Code Llama was released, but we noticed a ton of questions in the main thread about how/where to use it — not just from an API or the terminal, but in your own codebase as a drop-in replacement for Copilot Chat. Without this, developers don't get much utility from the model.
This concern is also important because benchmarks like HumanEval don't perfectly reflect the quality of responses. There's likely to be a flurry of improvements to coding models in the coming months, and rather than relying on the benchmarks to evaluate them, the community will get better feedback from people actually using the models. This means real usage in real, everyday workflows.
We've worked to make this possible with Continue (https://github.com/continuedev/continue) and want to hear what you find to be the real capabilities of Code Llama. Is it on par with GPT-4, does it require fine-tuning, or does it excel only at certain tasks?
If you’d like to try Code Llama with Continue, it only takes a few steps to set up (https://continue.dev/docs/walkthroughs/codellama), either locally with Ollama, or through TogetherAI or Replicate's APIs.
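For the local route, the Ollama CLI makes getting the model a one-liner. A minimal sketch, assuming Ollama is installed (the `codellama` model tag may vary by Ollama release):

```shell
# Pull the Code Llama weights locally (model tag may differ by release)
ollama pull codellama

# Sanity-check the model from the terminal before wiring it into the editor
ollama run codellama "Write a Python function that reverses a string"
```

Once the model answers from the terminal, Continue can be pointed at the local Ollama server per the walkthrough linked above.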
I use Copilot in Neovim[1]. It was remarkably simple to get installed. Highly recommend.
[1]: https://github.com/github/copilot.vim
You can cobble together an OpenAI-esque server locally with llama.cpp: https://github.com/ggerganov/llama.cpp/issues/2766
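A rough sketch of that setup, assuming you've built llama.cpp's `server` example and downloaded a GGUF model (the model path is hypothetical, and flags vary by version):

```shell
# Start llama.cpp's HTTP server on port 8080 (model path is illustrative)
./server -m ./models/codellama-7b.Q4_K_M.gguf -c 4096 --port 8080

# In another terminal, hit the native /completion endpoint
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def fib(n):", "n_predict": 64}'
```

The linked issue is about wrapping this in an OpenAI-compatible interface so existing OpenAI clients can point at the local server unchanged.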
[UniteAI](https://github.com/freckletonj/uniteai) I think fits the bill for you.
This is my project, where the goal is to Unite your AI-stack inside your editor (so, Speech-to-text, Local LLMs, Chat GPT, Retrieval Augmented Gen, etc).
It's built atop a Language Server, so while no one has made an IntelliJ client yet, it would be simple to do. I'll help you do it if you open a GH Issue!
You'd need to mod https://github.com/jackmort/chatgpt.nvim to use Copilot or Ollama as the backend.