| | EdgeGPT | turbopilot |
|---|---|---|
| Mentions | 6 | 15 |
| Stars | 107 | 3,839 |
| Growth | - | - |
| Activity | 10.0 | 10.0 |
| Latest commit | 8 months ago | 8 months ago |
| Language | Python | C++ |
| License | - | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
EdgeGPT

- Which Models Best for Programming?
  Else, chain your local model to the internet with the EdgeGPT extension.
- AgentOoba v0.1 - better UI, better contextualization, the beginnings of langchain integration and tools
- EdgeGPT Extension for Text Generation Webui - Easy Internet access
  If you can open an issue on my page, I can keep track of the errors easily. Right now I have some exams, but after them I'll try to fix the errors.
turbopilot

- New version of Turbopilot released!
  GGML for Falcoder7B, SantaCoder 1B, TinyStarCoder 160M
  fyi https://github.com/ravenscroftj/turbopilot
- April 2023
  TurboPilot: a self-hosted Copilot clone that uses the library behind llama.cpp to run the 6-billion-parameter Salesforce CodeGen model in 4 GiB of RAM. (https://github.com/ravenscroftj/turbopilot)
- Which Models Best for Programming?
  This repo has potential.
- [D] What Repos/Tools Should We Pay Attention To?
  Right now https://github.com/ggerganov/llama.cpp is the dominant back-end for querying models, but forks and alternatives like https://github.com/ravenscroftj/turbopilot keep popping up. Increasingly, models submitted to huggingface explicitly note in their READMEs that the model is not compatible with llama.cpp, and that a different back-end must be used.
- newbie seeking impressive llama models, am i missing something?
  There's turbopilot. I haven't tried it yet, but it looks promising.
- LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
- LLM specialized in programming?
  Turbopilot | open-source LLM code completion engine and Copilot alternative
- Locally running models like ChatGPT for Emacs?
  According to its README, this 6B-parameter tool can be run with 4 GB of RAM. https://github.com/ravenscroftj/turbopilot
- What models and setup is good for generating code
  There is an interesting link: https://github.com/ravenscroftj/turbopilot/wiki/Converting-and-Quantizing-The-Models. Just wondering if anyone has done this with 16b and put the weights somewhere.
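One of the mentions above points at LocalAI, which exposes an OpenAI-compatible HTTP API for locally hosted models. As a minimal sketch of what "OpenAI compatible" means in practice (the server address `localhost:8080` and the model name `ggml-gpt4all-j` are assumptions that depend on your local install), a chat completion request can be built with nothing but the standard library:

```python
import json
import urllib.request

# Assumed endpoint and model name; adjust both to your LocalAI setup.
BASE_URL = "http://localhost:8080"
MODEL = "ggml-gpt4all-j"  # hypothetical; use whatever model you have loaded

def build_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for a local server."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Write a Python function that reverses a string.")
# With a LocalAI server running:
#   reply = json.load(urllib.request.urlopen(req))
```

Because the request shape matches OpenAI's, existing OpenAI client code can usually be pointed at a local server just by swapping the base URL.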
What are some alternatives?
AgentOoba - An autonomous AI agent extension for Oobabooga's web ui
tabby - Self-hosted AI coding assistant
EdgeGPT - Reverse engineered API of Microsoft's Bing Chat AI
fauxpilot - FauxPilot - an open-source alternative to GitHub Copilot server
ggml - Tensor library for machine learning
prompt-engineering - ChatGPT Prompt Engineering for Developers - deeplearning.ai
telegram-chatgpt-concierge-bot - Interact with OpenAI's ChatGPT via Telegram and Voice.
simpleAI - An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients.
llm-apex-agents - Run Large Language Model "Agents" in Salesforce apex
Flowise - Drag & drop UI to build your customized LLM flow
web-llm - Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.
gpt-discord-bot - Example Discord bot written in Python that uses the completions API to have conversations with the `text-davinci-003` model, and the moderations API to filter the messages.