GPTQ-for-SantaCoder
4-bit quantization of SantaCoder using GPTQ (by mayank31398)
oasis
Local LLaMAs/Models in VSCode (by ChuloAI)
| | GPTQ-for-SantaCoder | oasis |
|---|---|---|
| Mentions | 5 | 4 |
| Stars | 54 | 51 |
| Growth | - | - |
| Activity | 8.2 | 7.7 |
| Last commit | about 1 year ago | about 1 year ago |
| Language | Python | Python |
| License | - | - |
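The core idea behind GPTQ-for-SantaCoder, 4-bit weight quantization, can be sketched with a simple round-to-nearest baseline. GPTQ itself goes further by compensating the rounding error of each weight across the remaining weights of the layer, but the representation (small integers plus a float scale) is similar. The helper names below are illustrative, not taken from the repository:

```python
import numpy as np

def quantize_4bit(w: np.ndarray):
    """Round-to-nearest 4-bit quantization with a per-row scale.

    Maps each weight to an integer in [-8, 7] (16 levels = 4 bits).
    GPTQ refines this baseline by adjusting not-yet-quantized weights
    to absorb each rounding error, but stores weights the same way.
    """
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from the 4-bit integers."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 16)).astype(np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale)
max_err = np.abs(w - w_hat).max()
```

The round-trip error is bounded by half a quantization step per row, which is why 4-bit models stay usable despite the 4x size reduction over fp16.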
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GPTQ-for-SantaCoder
Posts with mentions or reviews of GPTQ-for-SantaCoder.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-05-21.
- What coding llm is the best?
This is on my list of projects to explore but haven't made it here yet: https://github.com/mayank31398/GPTQ-for-SantaCoder
- Is there such a thing as local Llamas integrated into VSCode?
GPTQ-for-SantaCoder: 4-bit quantization for SantaCoder
- How to run starcoder-GPTQ-4bit-128g?
It says use https://github.com/mayank31398/GPTQ-for-SantaCoder to run it, but when I follow those instructions, I always get random errors or it just tries to re-download the original model files.
- Is it possible to run the 4-bit quantized StarCoder model in Oobabooga?
Is this because Oobabooga only works with 4-bit quantization of LLaMA, OPT, and GPT-J models? I noticed the guy that did the 4-bit quantization points to a pull of GPTQ where they made their own converter? https://github.com/mayank31398/GPTQ-for-SantaCoder
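Loader incompatibilities like the one described above usually come down to the storage format: 4-bit weights have to be packed into wider machine words, and a generic loader that expects unpacked tensors cannot read them. Real GPTQ kernels pack eight 4-bit values into each 32-bit word; the byte-level sketch below shows the same idea with two values per byte. The function names are illustrative, not part of any of these projects:

```python
import numpy as np

def pack_int4(q: np.ndarray) -> np.ndarray:
    """Pack pairs of signed 4-bit integers (range [-8, 7]) into bytes."""
    assert q.size % 2 == 0
    u = (q.astype(np.int16) + 8).astype(np.uint8)  # shift to unsigned [0, 15]
    lo, hi = u[0::2], u[1::2]
    return (hi << 4) | lo                          # two nibbles per byte

def unpack_int4(packed: np.ndarray) -> np.ndarray:
    """Inverse of pack_int4: recover the signed 4-bit integers."""
    lo = (packed & 0x0F).astype(np.int16) - 8
    hi = ((packed >> 4) & 0x0F).astype(np.int16) - 8
    out = np.empty(packed.size * 2, dtype=np.int8)
    out[0::2], out[1::2] = lo, hi
    return out

q = np.array([-8, -1, 0, 3, 7, 5], dtype=np.int8)
packed = pack_int4(q)        # 3 bytes instead of 6
restored = unpack_int4(packed)
```

A loader that does not know this layout sees only opaque integers, which is why quantized checkpoints need matching converter and kernel code.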
oasis
Posts with mentions or reviews of oasis.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-05-20.
- Wrote an open source VSCode plugin to generate docstrings, using a self-hosted server
- Generating docstrings with Salesforce Codegen and Microsoft Guidance, inside VSCode
If you're curious / want to try it yourself, do check my GitHub repository: https://github.com/paolorechia/oasis
- Is there such a thing as local Llamas integrated into VSCode?
oasis: local LLaMA models in VSCode
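The architecture oasis describes, an editor extension talking to a self-hosted model server, can be sketched end to end with Python's standard library. The endpoint path and the stubbed "generation" below are hypothetical stand-ins, not oasis's actual API; a real server would invoke a local model where the stub builds its reply:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class DocstringHandler(BaseHTTPRequestHandler):
    """Toy stand-in for a self-hosted docstring server: accepts source
    code as JSON and returns a docstring. A real server would run an
    LLM here instead of echoing a stub."""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        code = json.loads(self.rfile.read(length))["code"]
        # Placeholder "generation": wrap the first source line.
        doc = f'"""Auto-generated stub for: {code.splitlines()[0]}"""'
        body = json.dumps({"docstring": doc}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), DocstringHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The editor-side round trip: POST the function, read back a docstring.
req = Request(
    f"http://127.0.0.1:{server.server_port}/docstring",
    data=json.dumps({"code": "def add(a, b):\n    return a + b"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    docstring = json.loads(resp.read())["docstring"]
server.shutdown()
```

Keeping the model behind a local HTTP boundary is what lets the same extension target different backends without changes on the editor side.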
What are some alternatives?
When comparing GPTQ-for-SantaCoder and oasis you can also consider the following projects:
locai - Connect to Kobold API through VS Code
developer - the first library to let you embed a developer agent in your own app!
refact - WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
autodoc - Experimental toolkit for auto-generating codebase documentation using LLMs
starcoder.cpp - C++ implementation for 💫StarCoder