refact vs lmdeploy

| | refact | lmdeploy |
|---|---|---|
| Mentions | 34 | 3 |
| Stars | 1,422 | 2,391 |
| Growth | 3.3% | 12.6% |
| Activity | 9.8 | 9.8 |
| Latest commit | 4 days ago | 2 days ago |
| Language | JavaScript | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
refact
- RefactAI: Use best-in-class LLMs for coding in your IDE
- Supercharge Your Dev Workflow: How Refact's AI-powered Code Completion Boosts Developer Productivity
With over 1.3k stars on GitHub, more than 40k downloads and installs across the VS Code and JetBrains IDEs, and more than 50 positive reviews, Refact ranks among the best products in the AI coding assistant market.
- What do you use to run your models?
On VS Code I sometimes use continue.dev and refact.ai just for fun, and they are great!
- AI Code assistant for about 50-70 users
Refact was made for this: https://github.com/smallcloudai/refact
- Free WebUI for Fine-Tuning and Self-Hosting Open-Source LLMs for Coding
- LocalPilot: Open-source GitHub Copilot on your MacBook
You should check out [refact.ai](https://github.com/smallcloudai/refact). It has both autocomplete and chat, and it's in active development with lots of new features coming soon (context search, fine-tuning for larger models, etc.).
- Replit's new AI Model now available on Hugging Face
I don’t recommend that, since it uses the cloud for the actual inference by default (and they provide no guidance for changing that).
I don’t consider cloud inference to count as getting it working “locally”, as requested by the comment above yours.
Refact works nicely and runs locally, but the challenge with any new model is getting it supported by the existing software: https://github.com/smallcloudai/refact/
- Refact.ai 1.0.0 Released
- 📝 🚀 Creating our first documentation from scratch using Astro and Refact AI coding assistant
Previously, we used Astro for our refact.ai website and wanted to stay within the Astro ecosystem for the documentation.
- 🤖We trained a small 1.6b code model and you can use it as a personal copilot in Refact for free🤖
Refact LLM integrates easily into existing developer workflows via an open-source Docker container and VS Code and JetBrains plugins. With Refact's intuitive user interface, developers can use the model for a variety of coding tasks. Fine-tuning is available in the self-hosted (Docker) and Enterprise versions, making suggestions more relevant for your private codebase.
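For readers curious what self-hosting actually involves: it amounts to running the project's Docker image and pointing the IDE plugin at it. A minimal sketch, following the command in the repo's README as I recall it (image name, port, and volume may have changed, so verify against https://github.com/smallcloudai/refact):

```sh
# Sketch of a self-hosted Refact server (verify flags against the current README).
# Requires an NVIDIA GPU and the NVIDIA Container Toolkit.
docker run -d --rm --gpus all \
  -p 8008:8008 \
  -v perm-storage:/perm_storage \
  smallcloud/refact_self_hosting
```

The VS Code or JetBrains plugin is then configured with the server's address (http://127.0.0.1:8008 in this sketch) instead of the cloud endpoint.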
lmdeploy
- AMD May Get Across the CUDA Moat
I wouldn’t say ROCm code is “slower”, per se, but in practice that’s how it presents. References:
https://github.com/InternLM/lmdeploy
https://github.com/vllm-project/vllm
https://github.com/OpenNMT/CTranslate2
You know what’s missing from all of these and many more like them? Support for ROCm. This is all before you get to the really wildly performant stuff like Triton Inference Server, FasterTransformer, TensorRT-LLM, etc.
ROCm is at the “get it to work stage” (see top comment, blog posts everywhere celebrating minor successes, etc). CUDA is at the “wring every last penny of performance out of this thing” stage.
In terms of hardware support, I think that one is obvious. The U in CUDA originally stood for unified. Look at the list of chips supported by Nvidia drivers and CUDA releases. Literally anything from at least the past 10 years that has Nvidia printed on the box will just run CUDA code.
One of my projects specifically targets Pascal up - when I thought even Pascal was a stretch. Cue my surprise when I got a report of someone casually firing it up on Maxwell when I was pretty certain there was no way it could work.
A Maxwell laptop chip. It also runs just as well on an H100.
THAT is hardware support.
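To make the "targets Pascal up" point concrete: CUDA projects typically declare the generations they support as explicit nvcc flags, emitting native machine code per architecture plus PTX that newer GPUs can JIT-compile. A hedged sketch (these flags are illustrative, not taken from the commenter's project):

```sh
# Illustrative "Pascal and up" build (not the commenter's actual flags).
# Each -gencode emits native SASS for one GPU generation; the final line
# embeds PTX so GPUs newer than Hopper can still JIT-compile the kernels.
# Maxwell (sm_52) is deliberately absent, which is why running there would
# normally be expected to fail.
nvcc -o app kernel.cu \
  -gencode arch=compute_60,code=sm_60 \
  -gencode arch=compute_70,code=sm_70 \
  -gencode arch=compute_80,code=sm_80 \
  -gencode arch=compute_90,code=sm_90 \
  -gencode arch=compute_90,code=compute_90
```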
- Nvidia Introduces TensorRT-LLM for Accelerating LLM Inference on H100/A100 GPUs
vLLM has healthy competition. Not affiliated, but try lmdeploy:
https://github.com/InternLM/lmdeploy
In my testing it’s significantly faster and more memory efficient than vLLM when configured with AWQ int4 quantization and an int8 KV cache.
If you look at the PRs, issues, etc. you’ll see there are many more optimizations in the works. That said, there are also PRs and issues for some of the lmdeploy tricks in vLLM as well (AWQ, Triton Inference Server, etc.).
I’m really excited to see where these projects go!
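For reference, the configuration that comment describes maps onto lmdeploy's pipeline API roughly as follows. This is a minimal sketch assuming a recent lmdeploy release and an AWQ-quantized checkpoint; the model ID is a placeholder, and the exact API has evolved across versions, so check the repo's docs:

```python
# Sketch: 4-bit AWQ weights plus an int8 KV cache with lmdeploy's
# TurboMind backend. The model ID is a placeholder -- substitute any
# AWQ-quantized model lmdeploy supports.
from lmdeploy import pipeline, TurbomindEngineConfig

engine_config = TurbomindEngineConfig(
    model_format="awq",  # weights are AWQ int4
    quant_policy=8,      # 8 = int8 KV cache (4 would mean int4; 0 disables)
)

pipe = pipeline("internlm/internlm2-chat-7b-4bit", backend_config=engine_config)
print(pipe(["Summarize why KV-cache quantization saves memory."]))
```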
- Meta: Code Llama, an AI Tool for Coding
What are some alternatives?
tabby - Self-hosted AI coding assistant
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
fauxpilot - FauxPilot - an open-source alternative to GitHub Copilot server
llama.cpp - LLM inference in C/C++
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
llama-cpp-python - Python bindings for llama.cpp
CTranslate2 - Fast inference engine for Transformer models
developer - the first library to let you embed a developer agent in your own app!
smartcat
supervision - We write your reusable computer vision tools. 💜
seamless_communication - Foundational Models for State-of-the-Art Speech and Text Translation