| | flake | TokenHawk |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 593 | 98 |
| Growth | 3.9% | - |
| Activity | 4.4 | 10.0 |
| Last commit | 7 days ago | 11 months ago |
| Language | Nix | C++ |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
flake

- Running AI Models on NixOS
- Nixified.Ai Release 2
- Llama.cpp: Full CUDA GPU Acceleration
  > Ideally, there's Nix (and poetry2nix) that could take care of everything, but only a few folks write Flakes for their projects.
  Relevant to "AI, Python, setting up is hard ... nix", there's stuff like: https://github.com/nixified-ai/flake
- Can you substitute conda with Nix for Data Science and ML/AI?
  However, I would reach out to the Nixified.ai folks about it, because I can see that the invoke.ai build script mentions pytorch and several other hard-to-install packages (albeit not detectron).
- A Nix flake for many AI projects
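The posts above point at the core idea: a flake pins an AI project's whole Python toolchain (PyTorch and friends) in a lock file, so setup becomes a single command instead of a conda dance. A minimal sketch of what such a flake might look like, assuming an x86_64-linux machine and the packages available in nixpkgs (the project names and structure here are illustrative, not taken from nixified-ai/flake itself):

```nix
{
  # Hypothetical sketch: pinning an AI project's Python environment with a flake.
  description = "Dev shell with a pinned PyTorch toolchain";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      devShells.${system}.default = pkgs.mkShell {
        # Pull PyTorch and NumPy from nixpkgs instead of conda/pip,
        # so the exact versions are recorded in flake.lock.
        packages = [
          (pkgs.python3.withPackages (ps: [ ps.torch ps.numpy ]))
        ];
      };
    };
}
```

With this in a repository, `nix develop` drops you into a shell where `python` already has the pinned libraries, and anyone cloning the repo reproduces the same environment from the lock file.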
TokenHawk
What are some alternatives?
nonguix - Nonguix mirror – pull requests ignored, please use upstream for that
llama_cpp.rb - llama_cpp provides Ruby bindings for llama.cpp
guix-nonfree - Unofficial collection of packages that are not going to be accepted in to guix
llama.cpp - LLM inference in C/C++
lit-llama - Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
serving - A flexible, high-performance serving system for machine learning models
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
darknet - Convolutional Neural Networks