| | ggllm.cpp | torcheval |
|---|---|---|
| Mentions | 8 | 3 |
| Stars | 243 | 196 |
| Stars growth (monthly) | - | 5.1% |
| Activity | 9.5 | 7.5 |
| Latest commit | 4 months ago | 19 days ago |
| Language | C | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ggllm.cpp
- Is there a way to use a quantized Falcon 40B with SillyTavern (on Apple Silicon)?
I'd like to try https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-40B-GGML with SillyTavern (running on Apple Silicon). The only way I've found to run Falcon 40B quantized on Apple Silicon is with https://github.com/cmp-nct/ggllm.cpp but I haven't figured out any way to get SillyTavern to use that as a local model. Does anyone know of a way to get this working?
- How Is LLaMa.cpp Possible?
It doesn't support Falcon right now, but there's a fork that does (https://github.com/cmp-nct/ggllm.cpp/).
- Alfred-40B, an OSS RLHF version of Falcon40B
- Falcon ggml/ggcc with langchain
To load Falcon models in the new ggcc file format, which is similar to ggml, I'm using this tool: https://github.com/cmp-nct/ggllm.cpp, which is a fork of https://github.com/ggerganov/llama.cpp
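For illustration, here is a minimal sketch of the kind of glue this post describes: wrapping a ggllm.cpp command-line binary as a custom LangChain LLM. The binary name (falcon_main), its flags, and the model filename are assumptions based on the llama.cpp conventions the fork inherits, not details confirmed by the post:

```python
import subprocess
from typing import List, Optional

from langchain.llms.base import LLM

class FalconGgllm(LLM):
    """Hypothetical wrapper that shells out to a ggllm.cpp binary."""

    binary: str = "./falcon_main"  # assumed binary name from the ggllm.cpp build
    model: str = "./falcon-7b.ggccv1.q4_0.bin"  # hypothetical quantized file

    @property
    def _llm_type(self) -> str:
        return "ggllm.cpp"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs) -> str:
        # Run the CLI once per prompt and return whatever it prints
        # (flags assumed to mirror llama.cpp's -m/-p conventions).
        out = subprocess.run(
            [self.binary, "-m", self.model, "-p", prompt],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

llm = FalconGgllm()
print(llm.invoke("Explain the ggcc file format in one sentence."))
```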
- Show HN: Danswer – open-source question answering across all your docs
The GGLLM fork seems to be the leading option for Falcon for now [1]
It comes with its own variant of the GGML sub-format ("ggccv1"), but there are quants available on HF [2]
Although if you have a GPU, I'd go with the newly released AWQ quantization instead [3]; the performance is better.
(I may or may not have a mild local LLM addiction - and video cards cost more than drugs)
[1] https://github.com/cmp-nct/ggllm.cpp
[2] https://huggingface.co/TheBloke/falcon-7b-instruct-GGML
[3] https://huggingface.co/abhinavkulkarni/tiiuae-falcon-7b-inst...
- ChatGPT loses users for first time, shaking faith in AI revolution
For base tooling, things like:
https://huggingface.co/ (finding models and downloading them)
https://github.com/ggerganov/llama.cpp (llama)
https://github.com/cmp-nct/ggllm.cpp (falcon)
For interactive work (art/chat/research/playing around), things like:
https://github.com/oobabooga/text-generation-webui/blob/main... (llama) (Also, the llama.cpp project just added a decent built-in chat server)
https://github.com/invoke-ai/InvokeAI (stable-diffusion)
Plus a bunch of hacked together scripts.
Some example models (I'm linking to quantized versions that someone else has made, but the tooling to create them from the published fp16 models is in the above repos):
https://huggingface.co/TheBloke/llama-65B-GGML
https://huggingface.co/TheBloke/falcon-40b-instruct-GPTQ
https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...
etc. Hugging Face has quite a number, although some base models require filling out forms before you can download them for tuning/training.
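As a concrete sketch of the "finding models and downloading them" step, the huggingface_hub client can fetch a single quantized file from repos like the ones above (the filename here is an assumption; check the repo's file list for the real one):

```python
from huggingface_hub import hf_hub_download

# Fetch one quantized file from a repo like those linked above.
# The filename is an assumption; browse the repo's file list first.
path = hf_hub_download(
    repo_id="TheBloke/llama-65B-GGML",
    filename="llama-65b.ggmlv3.q4_0.bin",
)
print(path)  # local cache path to the downloaded weights
```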
- Falcon LLM – A 40B Model
- Run machine learning on 7900XT/7900XTX using ROCm 5.5.0 on Ubuntu 22.04
I did another test running an LLM (gpt4all-falcon, https://huggingface.co/nomic-ai/gpt4all-falcon) quantized to Q5_0 and Q5_1 on an AMD GPU. I used this awesome project, https://github.com/cmp-nct/ggllm.cpp (a fork of https://github.com/ggerganov/llama.cpp). I hipified the CUDA file into HIP code and made some modifications to it (PR: https://github.com/cmp-nct/ggllm.cpp/pull/3).
torcheval
- How Is LLaMa.cpp Possible?
Reading this could make people believe it is computed from the probability distribution of the model alone.
To be clearer, it is the exponential of the average negative log probability that the model assigns to the real tokens of a sample text[0]. Roughly, it measures how well the model predicts the sample text. A perfect model would have a perplexity of 1; a random (uniform) model has a perplexity equal to the number of possible tokens; the worst model has infinite perplexity.
[0]: https://github.com/pytorch/torcheval/blob/3faf19c060b8a7c074...
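A minimal sketch of that definition in plain PyTorch (illustrative only; torcheval's actual implementation lives at the linked URL):

```python
import torch

def perplexity(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits: (num_tokens, vocab_size); targets: (num_tokens,) real token ids."""
    log_probs = torch.log_softmax(logits, dim=-1)
    # Negative log probability the model assigns to each real token.
    nll = -log_probs[torch.arange(targets.numel()), targets]
    # exp(mean NLL): 1 for a perfect model, vocab_size for a uniform one.
    return torch.exp(nll.mean())
```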
- What skills are necessary to understand/be able to make meaningful contributions to PyTorch?
Shameless plug: my team works on torcheval and torchtnt. Neither is core PyTorch, but if you're looking to help build out tooling for metric evaluation or training frameworks, both libraries are pretty new, with plenty of low-hanging fruit.
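For a sense of what the library looks like, the basic torcheval pattern is update over batches, then compute; a small sketch (the metric choice and numbers are illustrative):

```python
import torch
from torcheval.metrics import MulticlassAccuracy

# Accumulate the metric over batches, then compute the final value.
metric = MulticlassAccuracy()
for logits, labels in [
    (torch.tensor([[0.9, 0.1], [0.2, 0.8]]), torch.tensor([0, 1])),
    (torch.tensor([[0.4, 0.6], [0.7, 0.3]]), torch.tensor([0, 0])),
]:
    metric.update(logits, labels)  # accepts logits or predicted labels
print(metric.compute())  # tensor(0.7500): 3 of 4 predictions correct
```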
- [D] AMA: The Stability AI Team
Hey, I work on TorchEval; let us know if we can be of any help here :)
What are some alternatives?
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
tnt - A lightweight library for PyTorch training tools and utilities
llama2.cs - Inference Llama 2 in one file of pure C#
llama.cpp - LLM inference in C/C++
curated-transformers - 🤖 A PyTorch library of curated Transformer models and their composable components
polyglot - Polyglot: Large Language Models of Well-balanced Competence in Multi-languages
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
GPTCache - Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
stable-diffusion-webui - Stable Diffusion web UI