| | llm-ls | continue |
|---|---|---|
| Mentions | 2 | 18 |
| Stars | 471 | 11,509 |
| Growth | 11.7% | 16.1% |
| Activity | 8.2 | 10.0 |
| Last commit | 2 months ago | 6 days ago |
| Language | Rust | TypeScript |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llm-ls
-
Continue will generate, refactor, and explain entire sections of code
> I'd have expected that the main lever the product has in being better than others is having a custom model that understands code edits much more than others.
True, but this is not something this particular product would solve. There are already models specifically trained to work on code. What's appealing to me is the flexibility of being able to choose which one to use, rather than my workflow being tied to a specific product or company.
> the IDE integration seems to be the "easy bit"
I admittedly haven't researched this much, but this is not currently the case. There is no generic API for LLMs that IDEs can plug into, so all plugins must target a specific model. We ultimately need an equivalent of an LSP server for LLMs, and while such a project exists[1], it looks to be in its infancy, as expected.
[1]: https://github.com/huggingface/llm-ls
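To make the idea concrete, here is a minimal Python sketch of the JSON-RPC-over-stdio framing that LSP servers use, which an "LSP server for LLMs" would presumably inherit. The completion request below is the standard LSP method; llm-ls's own custom methods may differ, so treat the shapes as illustrative:

```python
import json

def frame_lsp_message(payload: dict) -> bytes:
    """Frame a JSON-RPC payload with the Content-Length header
    required by the Language Server Protocol's base transport."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

# A standard LSP completion request; an LLM-backed server would answer
# this (or a custom method) with model-generated text instead of a
# symbol table lookup.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/completion",
    "params": {
        "textDocument": {"uri": "file:///tmp/example.py"},
        "position": {"line": 10, "character": 4},
    },
}
message = frame_lsp_message(request)
```

The appeal of this layer is exactly the flexibility described above: any editor that speaks the protocol can swap models behind the server without changing the plugin.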
-
LocalPilot: Open-source GitHub Copilot on your MacBook
Okay, I actually got local co-pilot set up. You will need these 4 things.
1) CodeLlama 13B or another FIM model https://huggingface.co/codellama/CodeLlama-13b-hf. You want "Fill in Middle" models because you're looking at context on both sides of your cursor.
2) HuggingFace llm-ls https://github.com/huggingface/llm-ls A large language model Language Server (is this making sense yet?)
3) HuggingFace inference framework. https://github.com/huggingface/text-generation-inference At least when I tested, you couldn't use something like llama.cpp or exllama with llm-ls, so you need to break out the heavy-duty badboy HuggingFace inference server. Just config and run. Now config and run llm-ls.
4) Okay, I mean you need an editor. I just tried nvim, and this was a few weeks ago, so there may be better support now. My experience was that it was full, honest-to-god Copilot. The CodeLlama models are known to be quite good for their size. The FIM part is great. Boilerplate is so much easier with the surrounding context. I'd like to see more models released that can work this way.
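For the curious, the "Fill in Middle" prompting in step 1 works by wrapping the text on each side of the cursor in sentinel tokens. A rough sketch in the CodeLlama style (the exact tokens and spacing come from the model's tokenizer, so treat this as illustrative rather than exact):

```python
def fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt, CodeLlama style: the
    model sees the code before and after the cursor and generates
    the span that belongs after <MID>."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Everything before the cursor...
before = "def add(a, b):\n    "
# ...and everything after it.
after = "\n\nprint(add(2, 3))"
prompt = fim_prompt(before, after)
```

This is why a FIM model beats a plain left-to-right one for in-editor completion: the suffix tells it what the generated middle has to connect to.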
continue
-
Ask HN: Who is hiring? (February 2024)
Continue (YC S23) | Founding Engineer | ONSITE | Full-time | San Francisco | $130-$170K + 1-2% Equity
At Continue, we are on a mission to make building software feel like making music. We are creating the open-source autopilot for VS Code and JetBrains: the easiest way to code with any LLM (https://github.com/continuedev/continue).
You are likely a good fit if you
- have founded or want to found your own startup one day
- have experience with frontend, backend, ML technologies
- are enthusiastic about AI/LLMs, open source, developer tools
- get excited about supporting users and helping customers
- want to work in-person in SF the majority of the time
More info: https://www.ycombinator.com/companies/continue/jobs/smcxRnM-...
-
Meta AI releases Code Llama 70B
Continue doesn’t support tab completion like Copilot yet.
A pull/merge request is being worked on: https://github.com/continuedev/continue/pull/758
-
Show HN: Open-source, privacy oriented alternative to GitHub Copilot chat
Good job on the project, but it's unfortunately named. A privy also refers to a latrine.
Given that this project was started well after Continue.dev, I think it would be useful to include an FAQ or a comparison table on what exactly makes this project different.
https://github.com/continuedev/continue
- Continue will generate, refactor, and explain entire sections of code
-
VSC Continue.dev with own Rest API
In this Continue.dev file https://github.com/continuedev/continue/blob/preview/server/continuedev/libs/llm/llamacpp.py the request to llama.cpp is implemented.
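Under the hood that call is just an HTTP POST to llama.cpp's server. A minimal sketch in the same spirit (field names follow the llama.cpp server's /completion endpoint; the defaults here are my assumptions, not Continue's actual settings):

```python
import json
import urllib.request

def build_completion_payload(prompt: str, n_predict: int = 128) -> dict:
    """Request body for llama.cpp's /completion endpoint."""
    return {"prompt": prompt, "n_predict": n_predict, "temperature": 0.2}

def llamacpp_complete(prompt: str, base_url: str = "http://localhost:8080") -> str:
    """POST the prompt to a running llama.cpp server and return the
    generated text from the response's 'content' field."""
    data = json.dumps(build_completion_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/completion",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```

Pointing the extension at your own REST API then comes down to swapping the URL and reshaping this payload to whatever your backend expects.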
- What is your motive for running open-source models, instead of just using a ready-made solution like GPT-4?
-
Ask HN: Who is hiring? (December 2023)
Continue | Founding Engineer | ONSITE | Full-time | San Francisco | $130-$170K + 1-2% Equity
At Continue, we are on a mission to make building software feel like making music. We are creating the open-source autopilot for software development—an IDE extension that brings the power of ChatGPT to VS Code and JetBrains (https://github.com/continuedev/continue).
You are likely a good fit if you
- have founded or want to found your own startup one day
- have experience with frontend, backend, and ML technologies
- are enthusiastic about AI/LLMs, open source, developer tools
- get excited about supporting users and helping customers
- want to work in-person in SF the majority of the time
More info: https://www.ycombinator.com/companies/continue/jobs/smcxRnM-...
-
How helpful are LLMs with MATLAB?
Original source: https://github.com/continuedev/continue/tree/main/docs/docs/languages/matlab.md
-
How are people using open source LLMs in production apps?
We are seeing developers deploy open-source LLMs for their teams to use while coding internally; each developer then uses them through Continue.
-
Show HN: Continue – open-source coding autopilot, now in JetBrains
Hi HN!
Since launching Continue two months ago (https://news.ycombinator.com/item?id=36882146), we've received amazing feedback, added features, and greatly improved reliability. But one of the biggest things we heard was the desire for a JetBrains extension. My co-founder Ty and I are super excited to share that we've released an extension for PyCharm, IntelliJ, WebStorm, and most other JetBrains IDEs - ready for alpha users at https://plugins.jetbrains.com/plugin/22707-continue.
Perhaps the most exciting part is that this effort was kickstarted and in great part developed by a community contributor! If you're curious what it took to make this happen, check out the PR here (https://github.com/continuedev/continue/pull/457). We hope to eventually support every IDE, so we made adding a new extension as easy as implementing a single class. If you're curious why this is possible, you can read more about the Continue Server and the architectural decisions we made here: https://blog.continue.dev/how-we-made-continue-ide-agnostic.
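As a sketch of what "implementing a single class" could look like (the method names below are hypothetical, not Continue's actual interface), the shared server would talk to any editor through a small protocol object:

```python
from abc import ABC, abstractmethod

class IDE(ABC):
    """Hypothetical editor-facing interface: each new editor plugin
    implements these hooks, and the shared server does the rest."""

    @abstractmethod
    def read_file(self, path: str) -> str:
        """Return the current contents of a file open in the editor."""

    @abstractmethod
    def apply_edit(self, path: str, new_contents: str) -> None:
        """Replace a file's contents with the model's suggested edit."""

class InMemoryIDE(IDE):
    """Toy implementation backed by a dict, handy for tests."""

    def __init__(self) -> None:
        self.files: dict[str, str] = {}

    def read_file(self, path: str) -> str:
        return self.files.get(path, "")

    def apply_edit(self, path: str, new_contents: str) -> None:
        self.files[path] = new_contents
```

The design choice is the same one LSP made for language tooling: keep the model/orchestration logic editor-agnostic, and make each editor integration a thin adapter.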
What are some alternatives?
OpenAI-sublime-text - Sublime Text OpenAI completion plugin with GPT-4 support!
llama-cpp-python - Python bindings for llama.cpp
text-generation-inference - Large Language Model Text Generation Inference
aider - aider is AI pair programming in your terminal
cody - AI that knows your entire codebase
vscode-flexigpt - FlexiGPT plugin for VSCode. Interact with AI models as a power user
localpilot
ChatGPT.nvim - ChatGPT Neovim Plugin: Effortless Natural Language Generation with OpenAI's ChatGPT API
refact - WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
prompt - 🥝 A command line application to interact with OpenAI's ChatGPT API.
openvsx - An open-source registry for VS Code extensions
uniteai - Your AI Stack in Your Editor