llamazoo vs llama.go

| | llamazoo | llama.go |
|---|---|---|
| Mentions | 2 | 12 |
| Stars | 57 | 1,165 |
| Growth | - | - |
| Activity | 10.0 | 8.2 |
| Latest commit | 5 months ago | 5 months ago |
| Language | C | Go |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llamazoo
-
Gotzmann LLM Score
Parameters for all tests (done with llama.cpp through LLaMAZoo):
-
Have to abandon my (almost) finished LLaMA-API-Inference server. If anybody finds it useful and wants to continue, the repo is yours. :)
Looks like exactly the same idea as what I'm doing right now with LLaMAZoo: https://github.com/gotzmann/llamazoo
llama.go
-
Understanding GPT Tokenizers
You might reuse a simple LLaMA tokenizer right in your Go code; look here:
https://github.com/gotzmann/llama.go/blob/8cc54ca81e6bfbce25...
-
April 2023
llama.go is like llama.cpp in pure Golang (https://github.com/gotzmann/llama.go)
- llama.go v1.4 - introduces Rest API for your GPT services
- [Golang] llama.go - Meta's LLaMA GPT inference in pure Golang
- LLaMA.go v1.4: now with scalable REST API exposing local GPT model
- Local LLaMA REST API with llama.go v1.4
- LLaMA.go v1.4 - introducing REST API for building your own GPT services
-
MiniGPT-4
I'm developing a framework [1] in Golang with this goal in mind :) It successfully runs relatively big LLMs right now, and diffusion models will be the next step
[1] https://github.com/gotzmann/llama.go/
- gotzmann/llama.go: llama.go is like llama.cpp in pure Golang!
- Show HN: Llama.go – port of llama.cpp to pure Go
What are some alternatives?
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
Flowise - Drag & drop UI to build your customized LLM flow
llama.py - Python bindings to llama.cpp
gpt4all.unity - Bindings of gpt4all language models for Unity3d running on your local machine
TALIS - Simple and fast server for GPTQ-quantized LLaMA inference
nn-zero-to-hero - Neural Networks: Zero to Hero
llama-go - Port of Facebook's LLaMA (Large Language Model Meta AI) in Golang with embedded C/C++
tokenizer - Pure Go implementation of OpenAI's tiktoken tokenizer
InternGPT - InternGPT (iGPT) is an open source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)
LLamaStack - ASP.NET Core Web, WebApi & WPF implementations for LLama.cpp & LLamaSharp
InternChat - InternGPT / InternChat allows you to interact with ChatGPT by clicking, dragging and drawing using a pointing device. [Moved to: https://github.com/OpenGVLab/InternGPT]
langchain-alpaca - Run Alpaca LLM in LangChain