ialacol
ggml
ialacol
-
Cloud Native Workflow for *Private* AI Apps
# This is the configuration file for DevSpace
#
# devspace use namespace private-ai   # suggested: use a namespace instead of the default namespace
# devspace deploy                     # deploy the skeleton of the app and the dependencies (ialacol)
# devspace dev                        # start syncing files to the container
# devspace purge                      # to clean up
version: v2beta1
deployments:
  # This is the manifest for our private app deployment
  # The app will be in "sleep mode" after `devspace deploy`, and start when we start
  # syncing files to the container by `devspace dev`
  private-ai-app:
    helm:
      chart:
        # We are deploying the so-called Component Chart: https://devspace.sh/component-chart/docs
        name: component-chart
        repo: https://charts.devspace.sh
      values:
        containers:
          - image: ghcr.io/loft-sh/devspace-containers/python:3-alpine
            command:
              - "sleep"
            args:
              - "99999"
        service:
          ports:
            - port: 8000
        labels:
          app.kubernetes.io/name: private-ai-app
  ialacol:
    helm:
      # the backend for the AI app; we are using ialacol https://github.com/chenhunghan/ialacol/
      chart:
        name: ialacol
        repo: https://chenhunghan.github.io/ialacol
      # overriding values.yaml of the ialacol helm chart
      values:
        replicas: 1
        deployment:
          image: quay.io/chenhunghan/ialacol:latest
          env:
            # We are using MPT-30B, which is the most sophisticated model at the moment
            # If you want to start with something small but mighty, try orca-mini
            # DEFAULT_MODEL_HG_REPO_ID: TheBloke/orca_mini_3B-GGML
            # DEFAULT_MODEL_FILE: orca-mini-3b.ggmlv3.q4_0.bin
            # MPT-30B
            DEFAULT_MODEL_HG_REPO_ID: TheBloke/mpt-30B-GGML
            DEFAULT_MODEL_FILE: mpt-30b.ggmlv0.q4_1.bin
            DEFAULT_MODEL_META: ""
        # Request more resources if needed
        resources: {}
        # pvc for storing the cache
        cache:
          persistence:
            size: 5Gi
            accessModes:
              - ReadWriteOnce
            storageClass: ~
        cacheMountPath: /app/cache
        # pvc for storing the models
        model:
          persistence:
            size: 20Gi
            accessModes:
              - ReadWriteOnce
            storageClass: ~
        modelMountPath: /app/models
        service:
          type: ClusterIP
          port: 8000
          annotations: {}
        # You might want to use the following to select a node with more CPU and memory;
        # for MPT-30B, we need at least 32GB of memory
        nodeSelector: {}
        tolerations: []
        affinity: {}
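Once `devspace dev` drops you into the app container, the backend can be sanity-checked with a plain HTTP call. A minimal sketch, assuming the Helm release exposes its ClusterIP service under the name ialacol on port 8000 and serves the usual OpenAI-style routes (the service name and paths are assumptions, not copied from the chart):

# from inside the private-ai-app container started by `devspace dev`
curl http://ialacol:8000/v1/models
# request a short completion; the model name should match DEFAULT_MODEL_FILE above
curl http://ialacol:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mpt-30b.ggmlv0.q4_1.bin", "prompt": "Hello", "max_tokens": 16}'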
-
Offline AI on Github Actions
You might be wondering why running Kubernetes is necessary for this project. This article was actually created during the development of a testing CI for the OSS project ialacol. The goal was to have a basic smoke test that verifies the Helm charts and ensures the endpoint returns a 200 status code. You can find the full source of the testing CI YAML here.
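A smoke test of that shape is easy to reproduce by hand; here is a rough sketch (the service name, namespace, and health route are assumptions on my part, the linked CI YAML is the authoritative version):

# port-forward the service installed by the ialacol Helm chart
kubectl -n private-ai port-forward svc/ialacol 8000:8000 &
# fail unless the endpoint answers with HTTP 200
curl --fail --silent --output /dev/null http://127.0.0.1:8000/v1/models && echo "smoke test passed"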
-
Containerized AI before Apocalypse
We are deploying a Helm release orca-mini-3b using the ialacol Helm chart.
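A minimal sketch of that deployment, assuming defaults for everything except the model (the release name and the env keys mirror the DevSpace values shown earlier, but check the chart's values.yaml for the authoritative options):

helm repo add ialacol https://chenhunghan.github.io/ialacol
helm repo update
# release orca-mini-3b, pointing the chart at an orca-mini GGML model
helm install orca-mini-3b ialacol/ialacol \
  --set deployment.env.DEFAULT_MODEL_HG_REPO_ID=TheBloke/orca_mini_3B-GGML \
  --set deployment.env.DEFAULT_MODEL_FILE=orca-mini-3b.ggmlv3.q4_0.bin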
- Deploy private AI to cluster
ggml
-
LLMs on your local Computer (Part 1)
git clone https://github.com/ggerganov/ggml
cd ggml
mkdir build
cd build
cmake ..
make -j4 gpt-j
../examples/gpt-j/download-ggml-model.sh 6B
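After the build and the download finish, the example binary can be run directly; roughly (the binary and model paths reflect the ggml examples at the time and may differ between versions):

# run the GPT-J example against the downloaded 6B weights
./bin/gpt-j -m models/gpt-j-6B/ggml-model.bin -p "This is an example of" -n 64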
-
GGUF, the Long Way Around
Cool. I was just learning about GGUF by creating my own parser for it based on the spec https://github.com/ggerganov/ggml/blob/master/docs/gguf.md (for educational purposes)
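For anyone poking at the format the same way, the fixed-size header is easy to eyeball with a hex dump. Per the spec, a GGUF file starts with a 4-byte magic "GGUF", then a little-endian uint32 version, a uint64 tensor count and a uint64 metadata key/value count, 24 bytes in total (the file name below is just a placeholder):

# dump the first 24 bytes: magic, version, tensor_count, metadata_kv_count
xxd -l 24 some-model.gguf
# the first four bytes should read 47 47 55 46, i.e. ASCII "GGUF"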
-
Ask HN: People who switched from GPT to their own models. How was it?
If you don't care about the details of how those model servers work, then something that abstracts out the whole process like LM Studio or Ollama is all you need.
However, if you want to get into the weeds of how this actually works, I recommend you look up model quantization and some libraries like ggml[1] that actually do that for you.
[1] https://github.com/ggerganov/ggml
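To make "quantization" a bit more concrete, the common ggml/llama.cpp workflow is a two-step convert-then-quantize; a rough sketch (script and binary names have moved around between releases, so treat the exact names as illustrative):

# convert HF weights to a ggml/gguf file, then quantize it down to 4-bit
python convert.py ./models/my-model --outfile my-model-f16.gguf
./quantize my-model-f16.gguf my-model-q4_0.gguf q4_0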
-
GGUF File Format
-
Google just shipped libggml from llama-cpp into its Android AICore
Because the library is called ggml, but it supports gguf.
-
Q-Transformer
Apparently this guy, like a bunch of others such as https://github.com/ggerganov/ggml, is implementing transformers from papers for people who want them. Pretty cool.
-
[P] Inference Vision Transformer (ViT) in plain C/C++ with ggml
You can access it here: https://github.com/staghado/vit.cpp It has been added to the ggml library on GitHub: https://github.com/ggerganov/ggml
-
Falcon 180B Released
https://github.com/ggerganov/ggml
One note is that prompt ingestion is extremely slow on CPU compared to GPU. So short prompts are fine (as tokens can be streamed once the prompt is ingested), but long prompts feel extremely sluggish.
-
Stable Diffusion in pure C/C++
I did a quick run under a profiler, and on my AVX2 laptop the slowest part (>50%) was matrix multiplication (sgemm).
In the current version of GGML, if OpenBLAS is enabled, they convert matrices to FP32 before running sgemm.
If OpenBLAS is disabled, on an AVX2 platform they convert FP16 to FP32 on every FMA operation, which is even worse (due to repetition). After that, both ggml_vec_dot_f16 and ggml_vec_dot_f32 took first place in the profiler.
Source: https://github.com/ggerganov/ggml/blob/master/src/ggml.c#L10...
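If you want to compare the two code paths yourself, BLAS support is a build-time switch; roughly (the exact CMake option has varied across ggml versions, so check the README of the revision you build):

# rebuild with OpenBLAS so matrices are converted to FP32 once and handed to sgemm
cmake .. -DGGML_OPENBLAS=ON
make -j4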
-
Accessing Llama 2 from the command-line with the LLM-replicate plugin
For those getting started, the easiest one-click installer I've used is Nomic.ai's gpt4all: https://gpt4all.io/
This runs with a simple GUI on Windows/Mac/Linux, leverages a fork of llama.cpp on the backend, supports GPU acceleration, and covers LLaMA, Falcon, MPT, and GPT-J models. It also has API/CLI bindings.
I just saw a slick new tool, https://ollama.ai/, that lets you run llama2-7b with a single `ollama run llama2` command. It has a very simple 1-click installer for Apple Silicon Macs (but you need to build from source for anything else atm). It looks like it only supports llamas OOTB, but it also seems to use llama.cpp (via a Go adapter) on the backend - it seemed to be CPU-only on my MBA, but I didn't poke too much and it's brand new, so we'll see.
For anyone on HN, they should probably be looking at https://github.com/ggerganov/llama.cpp and https://github.com/ggerganov/ggml directly. If you have a high-end Nvidia consumer card (3090/4090) I'd highly recommend looking into https://github.com/turboderp/exllama
For those generally confused, the r/LocalLLaMA wiki is a good place to start: https://www.reddit.com/r/LocalLLaMA/wiki/guide/
I've also been porting my own notes into a single location that tracks models, evals, and has guides focused on local models: https://llm-tracker.info/
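If you do go the llama.cpp route directly, the quick start is short; a rough sketch (the `main` binary and flags reflect 2023-era llama.cpp and may have been renamed since, and the model path is a placeholder):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
# run a local quantized model; -ngl offloads layers to the GPU if one is available
./main -m ./models/llama-2-7b.Q4_0.gguf -p "Hello" -n 128 -ngl 35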
What are some alternatives?
langstream - LangStream. Event-Driven Developer Platform for Building and Running LLM AI Apps. Powered by Kubernetes and Kafka.
llama.cpp - LLM inference in C/C++
dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
Pontus - Open Source Privacy Layer
alpaca-lora - Instruct-tune LLaMA on consumer hardware
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llm - An ecosystem of Rust libraries for working with large language models
StableLM - StableLM: Stability AI Language Models
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)