vit.cpp
bark.cpp
vit.cpp
- I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now.
And more stuff I'm often checking back on:
- https://github.com/staghado/vit.cpp
- https://github.com/serp-ai/bark-with-voice-clone
- https://github.com/leejet/stable-diffusion.cpp (generate images)
- etc … there's too much fun stuff out there. Wish I had more free time haha.
- [P] Inference Vision Transformer (ViT) in plain C/C++ with ggml
You can access it here: https://github.com/staghado/vit.cpp
It has been added to the ggml library on GitHub: https://github.com/ggerganov/ggml
bark.cpp
- Show HN: I ported Suno AI's Bark model in C for fast realistic audio generation
- Bark.cpp: Port of Suno AI's Bark in C/C++ for fast inference
- Meta's Segment Anything written with C++ / GGML
Another GGML model port that I'm pretty excited about is https://github.com/PABannier/bark.cpp.
The Bark Python model is very compute-intensive and requires a powerful GPU to reach bearable inference speed. I really hope that bark.cpp with GPU/Metal support and a quantized model can bring usable inference speed on a laptop in the near future.
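The quantization mentioned above is what makes these ggml ports practical on laptops: weights are stored in small blocks of low-precision integers, each with its own scale factor. As a rough illustration (not bark.cpp's actual code), here is a minimal NumPy sketch of blockwise int8 quantization in the spirit of ggml's Q8_0 format, which stores one scale per block of 32 values:

```python
import numpy as np

def quantize_q8_0(x, block=32):
    # Blockwise int8 quantization: each block of `block` floats is stored
    # as one float scale plus `block` int8 values (roughly 4x smaller than f32).
    x = x.reshape(-1, block)
    scale = np.abs(x).max(axis=1, keepdims=True) / 127.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def dequantize_q8_0(q, scale):
    # Reconstruct approximate float weights at inference time.
    return q.astype(np.float32) * scale

x = np.linspace(-1.0, 1.0, 64, dtype=np.float32)
q, s = quantize_q8_0(x)
x_hat = dequantize_q8_0(q, s).reshape(-1)
print(float(np.abs(x - x_hat).max()))  # small per-element reconstruction error
```

The speedup on CPU comes from the smaller memory footprint (less bandwidth per matrix multiply) at the cost of a bounded per-block rounding error; the function names here are illustrative, not part of the bark.cpp API.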
What are some alternatives?
stable-diffusion.cpp - Stable Diffusion in pure C/C++
sam.cpp
minigpt4.cpp - Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML)
Queryable - Run OpenAI's CLIP model on iOS to search photos.
EdgeML - This repository provides code for machine learning algorithms for edge devices developed at Microsoft Research India.
aub.ai - AubAI brings you on-device gen-AI capabilities, including offline text generation and more, directly within your app.
Daisykit - Daisykit is an easy AI toolkit with face mask detection, pose detection, background matting, barcode detection, and more. With Daisykit, you don't need AI knowledge to build AI software.
llm - An ecosystem of Rust libraries for working with large language models
tokay-lite-sdk - Tokay Lite: Low Power Edge AI Camera SDK
StyleTTS2 - StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
frugally-deep - A lightweight header-only library for using Keras (TensorFlow) models in C++.