| | text-embeddings-inference | fastsdcpu |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 2,146 | 969 |
| Growth | 6.6% | - |
| Activity | 8.8 | 9.5 |
| Latest commit | 6 days ago | 12 days ago |
| Language | Rust | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
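The exact formula behind the Activity number is not published here; the sketch below shows one way a recency-weighted, percentile-scaled score like this could be computed. The exponential half-life and the 0-10 scaling are illustrative assumptions, not the site's actual method.

```python
# Minimal sketch of a recency-weighted "activity" score (assumptions, not the
# site's real formula): commits decay with an exponential half-life, and the
# final score is a percentile rank among tracked projects scaled to 0-10.
from datetime import datetime


def commit_weight(commit_date: datetime, now: datetime, half_life_days: float = 30.0) -> float:
    """Recent commits count more: a commit's weight halves every `half_life_days`."""
    age_days = (now - commit_date).total_seconds() / 86400
    return 0.5 ** (age_days / half_life_days)


def raw_activity(commit_dates: list[datetime], now: datetime) -> float:
    """Sum of decayed weights over a project's recent commits."""
    return sum(commit_weight(d, now) for d in commit_dates)


def activity_score(project_raw: float, all_projects_raw: list[float]) -> float:
    """Percentile rank among all tracked projects, scaled to 0-10.
    A score of 9.0 then means the project is in roughly the top 10%."""
    rank = sum(1 for r in all_projects_raw if r <= project_raw) / len(all_projects_raw)
    return round(rank * 10, 1)
```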
text-embeddings-inference
- HuggingFace text-generation-inference is reverting to Apache 2.0 License
  Worth noting that this also impacts the great https://github.com/huggingface/text-embeddings-inference, which allows anyone to run state-of-the-art embeddings with great performance. (A minimal usage sketch follows this list.)
- FLaNK Stack Weekly for 30 Oct 2023
- Fast inference for text models using Rust
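To make the mention above concrete, here is a minimal sketch of requesting embeddings from a text-embeddings-inference server over its HTTP API. It assumes a server is already running locally (for example, started from the project's Docker image with an embedding model loaded); the port, model choice, and input text are assumptions, not part of the mention.

```python
# Minimal sketch: query a locally running text-embeddings-inference server.
# Assumes the server is listening on localhost:8080 with an embedding model
# such as BAAI/bge-base-en-v1.5 loaded (port and model are assumptions).
import requests

response = requests.post(
    "http://127.0.0.1:8080/embed",
    json={"inputs": "What is Deep Learning?"},
    timeout=30,
)
response.raise_for_status()

embedding = response.json()[0]  # one embedding vector per input string
print(len(embedding), embedding[:5])
```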
fastsdcpu
- FastSD CPU beta 21 - SDXL Turbo OpenVINO support (2.5 seconds on CPU)
  Release: https://github.com/rupeshs/fastsdcpu/releases/tag/v1.0.0-beta.21
- Krita AI Diffusion
  Too bad I don't have the hardware to run it. Has anyone had success with Stable Diffusion on a Steam Deck? The only thing that works for me is https://github.com/rupeshs/fastsdcpu, but it takes about a minute per 512x512 image and is LCM-based.
- $95 AMD CPU Becomes 16GB GPU to Run AI Software
  > one minute and 50 seconds to generate a 512 x 512-pixel image with the default setting of 50 steps
  A little over 2 s per iteration? That is... not great. It is slower than CPU diffusion: https://github.com/rupeshs/fastsdcpu
  Stable Diffusion in particular doesn't need much VRAM anyway. I get that many people are stuck on lower-end computers, but ~4 GB is not an unreasonable requirement.
- FLaNK Stack Weekly for 30 Oct 2023
- Generate images in one second on your Mac using a latent consistency model (a minimal LCM sketch follows this list)
- rupeshs/fastsdcpu: Fast stable diffusion CPU
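Several of the mentions above come down to running a latent consistency model (LCM) on CPU. The sketch below illustrates that idea with the diffusers library rather than fastsdcpu's own code; the checkpoint, step count, and guidance scale are assumptions taken as typical LCM settings, and it assumes a recent diffusers release with built-in LCM support.

```python
# Minimal sketch of LCM image generation on CPU using diffusers (not fastsdcpu
# itself). Model id, step count, and guidance scale are illustrative assumptions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",  # assumed LCM checkpoint
    torch_dtype=torch.float32,       # float32 for CPU inference
)
pipe.to("cpu")

image = pipe(
    prompt="a photo of a red bicycle leaning against a brick wall",
    num_inference_steps=4,  # LCMs need only a handful of steps
    guidance_scale=8.0,
).images[0]

image.save("lcm_cpu.png")
```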
What are some alternatives?
llama-node - Believe in AI democratization. llama for Node.js, backed by llama-rs, llama.cpp and rwkv.cpp; works locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models.
Deep-Learning-Ultra - Open source Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in PyTorch, OpenCV (compiled for GPU), TensorFlow 2 for GPU, PyG and NVIDIA RAPIDS
smartgpt - A program that provides LLMs with the ability to complete complex tasks using plugins.
safetensors_util - Utility for Safetensors Files
auto-rust - auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, using procedural macros.
sd-gui - Clean and simple Stable Diffusion GUI for macOS, Windows, and Linux
floneum - Instant, controllable, local pre-trained AI models in Rust
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
openv0 - AI generated UI components
latent-consistency-model - Run Latent Consistency Models on your Mac
CSGHub - CSGHub is an open-source large model asset platform, similar to an on-premise Hugging Face, that helps users manage the assets involved in the lifecycle of LLMs and LLM applications (datasets, model files, code and more), in the way OpenStack Glance manages VM images, Harbor manages container images, and Sonatype Nexus manages artifacts. Feedback and stars ⭐️ are welcome.
stablediffusion-infinity - Outpainting with Stable Diffusion on an infinite canvas