text-generation-inference vs safetensors

| | text-generation-inference | safetensors |
|---|---|---|
| Mentions | 29 | 31 |
| Stars | 7,881 | 2,442 |
| Growth | 6.2% | 3.6% |
| Activity | 9.6 | 8.2 |
| Latest commit | 5 days ago | 8 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
text-generation-inference
- FLaNK AI - April 22, 2024
- Zephyr 141B, a Mixtral 8x22B fine-tune, is now available in Hugging Chat
I wanted to write that the TGI inference engine is no longer open source, but they have reverted the license back to Apache 2.0 for the new version, TGI v2.0: https://github.com/huggingface/text-generation-inference/rel...
Good news!
- Hugging Face reverts the license back to Apache 2.0
- HuggingFace text-generation-inference is reverting to Apache 2.0 License
- FLaNK Stack 05 Feb 2024
- Is there any open source app to load a model and expose API like OpenAI?
- AI Code assistant for about 50-70 users
Setting up a server for multiple users is very different from setting up an LLM for yourself. A safe bet would be to just use TGI, which supports continuous batching and is very easy to run via Docker on your server. https://github.com/huggingface/text-generation-inference
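For a sense of what that looks like from the client side, here is a minimal sketch, assuming a TGI instance is already listening on localhost:8080 (e.g. started from the Docker image in the repo above); the prompt and generation parameters are made up:

```python
# Minimal client sketch against TGI's /generate endpoint.
# Assumes a server is already running on localhost:8080.
import requests

response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is continuous batching?",
        "parameters": {"max_new_tokens": 100, "temperature": 0.7},
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["generated_text"])
```

The server batches concurrent requests like this one on the fly, which is what makes it a reasonable fit for a multi-user deployment.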
- LocalPilot: Open-source GitHub Copilot on your MacBook
Okay, I actually got local copilot set up. You will need these four things.
1) CodeLlama 13B or another FIM model: https://huggingface.co/codellama/CodeLlama-13b-hf. You want "Fill in the Middle" models because you're looking at context on both sides of your cursor (see the sketch after this list).
2) HuggingFace llm-ls: https://github.com/huggingface/llm-ls. A large language model Language Server (is this making sense yet?).
3) The HuggingFace inference framework: https://github.com/huggingface/text-generation-inference. At least when I tested it, you couldn't use something like llama.cpp or exllama with llm-ls, so you need to break out the heavy-duty badboy HuggingFace inference server. Just configure and run it, then configure and run llm-ls.
4) Okay, I mean you need an editor. I just tried nvim, and this was a few weeks ago, so there may be better support now. My experience was that it was full, honest-to-god copilot. The CodeLlama models are known to be quite good for their size. The FIM part is great: boilerplate works so much easier with the surrounding context. I'd like to see more models released that can work this way.
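As a rough sketch of how the FIM prompting in point 1 works (using CodeLlama's documented `<PRE>`/`<SUF>`/`<MID>` infill tokens; the code snippet and server URL are assumptions), the prompt is just the text on both sides of the cursor stitched around special tokens:

```python
# Sketch: hand-building a CodeLlama-style fill-in-the-middle prompt
# and sending it to a TGI server like the one in step 3.
import requests

prefix = "def fibonacci(n):\n    "   # code before the cursor
suffix = "\n    return result"        # code after the cursor
prompt = f"<PRE> {prefix} <SUF>{suffix} <MID>"

resp = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": prompt, "parameters": {"max_new_tokens": 64}},
    timeout=60,
)
print(resp.json()["generated_text"])  # the model's proposal for the middle
```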
- Mistral 7B Paper on ArXiv
A simple microservice would be https://github.com/huggingface/text-generation-inference. It works flawlessly in Docker on my Windows machine, which is extremely shocking.
- best way to serve llama V2 (llama.cpp VS triton VS HF text generation inference)
I am wondering what is the best / most cost-efficient way to serve Llama V2:
- llama.cpp (is it production-ready or just for playing around?)
- Triton Inference Server?
- HF text-generation-inference?
safetensors
- Llamafile lets you distribute and run LLMs with a single file
The ML field is doing work in that area: https://github.com/huggingface/safetensors
- Hugging Face raises $235M from investors including Salesforce and Nvidia
FYI the file format, safetensors, was proposed, developed and maintained by HF, and involved people from groups such as Eleuther and Stability for external security audits.
https://github.com/huggingface/safetensors https://huggingface.co/blog/safetensors-security-audit
- I Made Stable Diffusion XL Smarter by Finetuning It on Bad AI-Generated Images
Thank you for the note on this. I had not heard there was already trojan-horse malware being slipped into tensor files as Python scripts. Apparently torch's pickle-based loader executes whatever code is embedded in the tensor file, with no filter.
I've heard surprisingly little commentary on this topic. The developer's full explanation of how safetensors are "safe" can be found at: https://github.com/huggingface/safetensors/discussions/111
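To make the risk concrete, here is a minimal, self-contained sketch (a textbook illustration, not code from any real malware) of how a pickle can run arbitrary code at load time:

```python
import os
import pickle

class Payload:
    # __reduce__ may return any callable plus its arguments;
    # pickle.loads invokes that callable during deserialization.
    def __reduce__(self):
        return (os.system, ("echo arbitrary code ran at load time",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # executes the command; torch .bin/.ckpt files are pickle underneath
```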
- Pickle safety in Python
- What makes .safetensors files safe?
Here the developer goes into some detail about what kinds of protections .safetensors files have: https://github.com/huggingface/safetensors/discussions/111
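The short version of that discussion: a .safetensors file is just a small JSON header plus raw tensor bytes, so loading it never executes code. A minimal round-trip sketch using the library's PyTorch helpers (the tensor and file names here are made up):

```python
import torch
from safetensors.torch import save_file, load_file

# Saving writes only tensor names, dtypes, shapes, and raw bytes.
tensors = {"embedding.weight": torch.randn(8, 4)}
save_file(tensors, "model.safetensors")

# Loading parses the header and reads raw data;
# there is no code path for the file to execute anything.
loaded = load_file("model.safetensors")
print(loaded["embedding.weight"].shape)  # torch.Size([8, 4])
```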
- Security PSA: huggingface models are code, not just data.
Use the safetensors format, which allows safe persistence and loading of models for common libraries (TensorFlow, PyTorch, JAX, etc.). We went through external audits in the last few months (blog post). The current direction is to make this the default format.
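A side benefit of the format, sketched below with the library's `safe_open` API (the file and tensor names are hypothetical), is lazy access: you can inspect shapes or pull a single tensor without reading the whole checkpoint:

```python
from safetensors import safe_open

# Lazy access: the header is parsed up front, tensor data is fetched on demand.
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():
        print(name, f.get_slice(name).get_shape())  # shapes without loading data
    weight = f.get_tensor("embedding.weight")        # load just this one tensor
```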
- What's your favorite model? Right now I'm really enjoying dreamshaper.
- Lora, ggml, safetensors, hf, etc. Is there a glossary and guide on which model to choose?
- Stability AI Launches the First of Its StableLM Suite of Language Models
I've been diving in lately, and while it's not efficient, the only way to manage is to create a new conda/mamba environment, or a custom Docker image, for all the conflicting packages.
For safety and speed, you should prefer the safetensors format: https://huggingface.co/docs/safetensors/speed
If you know what you are doing, you can do your own conversions: https://github.com/huggingface/safetensors. Or, for safety, use https://huggingface.co/spaces/diffusers/convert
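A rough sketch of such a do-it-yourself conversion (assuming a trusted pickle checkpoint named model.ckpt; the nesting of weights under a "state_dict" key is a common but not universal convention):

```python
import torch
from safetensors.torch import save_file

# Only convert checkpoints you trust: torch.load unpickles,
# which can execute code embedded in the file.
checkpoint = torch.load("model.ckpt", map_location="cpu")

# Many training frameworks nest the weights under a "state_dict" key.
state_dict = checkpoint.get("state_dict", checkpoint)

# Keep only tensors, made contiguous as save_file requires.
tensors = {
    name: value.contiguous()
    for name, value in state_dict.items()
    if isinstance(value, torch.Tensor)
}
save_file(tensors, "model.safetensors")
```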
- CKPT to Safetensors
GitHub - huggingface/safetensors: Simple, safe way to store and distribute tensors
What are some alternatives?
llama-cpp-python - Python bindings for llama.cpp
stable-diffusion-webui - Stable Diffusion web UI
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
llama.cpp - LLM inference in C/C++
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
Safe-and-Stable-Ckpt2Safetensors-Conversion-Tool-GUI - Convert your Stable Diffusion checkpoints quickly and easily.
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
Stable-Diffusion-Pickle-Scanner-GUI - Pickle Scanner GUI
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
stable-diffusion-webui-model-toolkit - A multipurpose toolkit for managing, editing and creating models.