| | llama.go | turbopilot |
|---|---|---|
| Mentions | 12 | 15 |
| Stars | 1,168 | 3,839 |
| Growth | - | - |
| Activity | 8.2 | 10.0 |
| Latest commit | 5 months ago | 8 months ago |
| Language | Go | C++ |
| License | GNU General Public License v3.0 or later | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama.go
- Understanding GPT Tokenizers
You can reuse a simple LLaMA tokenizer right in your Go code; see here:
https://github.com/gotzmann/llama.go/blob/8cc54ca81e6bfbce25...
- April 2023
llama.go is like llama.cpp in pure Golang (https://github.com/gotzmann/llama.go)
- llama.go v1.4 - introduces Rest API for your GPT services
- [Golang] Llama.go - Meta's Llama GPT inference in pure Golang
- LLaMA.go v1.4: now with scalable REST API exposing local GPT model
- Local LLaMA REST API with llama.go v1.4
- LLaMA.go v1.4 - introducing REST API for building your own GPT services
- MiniGPT-4
I'm developing a framework [1] in Golang with this goal in mind :) It successfully runs relatively big LLMs right now, and diffusion models will be the next step
[1] https://github.com/gotzmann/llama.go/
- gotzmann/llama.go: llama.go is like llama.cpp in pure Golang!
- Show HN: Llama.go – port of llama.cpp to pure Go
turbopilot
- New version of Turbopilot released!
- GGML for Falcoder7B, SantaCoder 1B, TinyStarCoder 160M
fyi https://github.com/ravenscroftj/turbopilot
- April 2023
TurboPilot: self-hosted copilot clone which uses the library behind llama.cpp to run the 6 Billion Parameter Salesforce Codegen model in 4GiB of RAM. (https://github.com/ravenscroftj/turbopilot)
- Which Models Best for Programming?
This repo has potential
- [D] What Repos/Tools Should We Pay Attention To?
Right now https://github.com/ggerganov/llama.cpp is the dominant back-end for querying models, but forks and alternatives like https://github.com/ravenscroftj/turbopilot keep popping up. Increasingly, models submitted to Hugging Face explicitly note in their READMEs that the model is not compatible with llama.cpp and that a different back-end must be used.
- newbie seeking impressive llama models, am i missing something?
There's turbopilot. I haven't tried it yet, but it looks promising.
- LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
- LLM specialized in programming?
Turbopilot | open source LLM code completion engine and Copilot alternative
- Locally running models like Chatgpt for Emacs?
This 6B-parameter tool (based on its README) can be run with 4 GB of RAM. https://github.com/ravenscroftj/turbopilot
- What models and setup is good for generating code
There is an interesting link: https://github.com/ravenscroftj/turbopilot/wiki/Converting-and-Quantizing-The-Models . Just wondering if anyone has done this with 16b and put the weights somewhere.
What are some alternatives?
Flowise - Drag & drop UI to build your customized LLM flow
tabby - Self-hosted AI coding assistant
gpt4all.unity - Bindings of gpt4all language models for Unity3d running on your local machine
fauxpilot - FauxPilot - an open-source alternative to GitHub Copilot server
nn-zero-to-hero - Neural Networks: Zero to Hero
ggml - Tensor library for machine learning
tokenizer - Pure Go implementation of OpenAI's tiktoken tokenizer
prompt-engineering - ChatGPT Prompt Engineering for Developers - deeplearning.ai
LLamaStack - ASP.NET Core Web, WebApi & WPF implementations for LLama.cpp & LLamaSharp
telegram-chatgpt-concierge-bot - Interact with OpenAI's ChatGPT via Telegram and Voice.
langchain-alpaca - Run Alpaca LLM in LangChain
simpleAI - An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients.