-
exllama
A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
The quantized weights are 4-bit. I've used this implementation: https://github.com/abacaj/mpt-30B-inference/tree/main
I found vLLM to work pretty well. It gives a nice throughput boost. It doesn't support MPT yet, although you can try to add it: https://github.com/vllm-project/vllm. There's exllama for running quantized models: https://github.com/turboderp/exllama. You can also try TGI: https://github.com/huggingface/text-generation-inference
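For reference, here is a minimal sketch of offline batch inference with vLLM, assuming a model architecture vLLM already supports (the model name below is only an illustrative example, not something from this thread):

# Minimal vLLM batch-inference sketch (Python).
# The model name is an assumption; swap in any architecture vLLM supports.
from vllm import LLM, SamplingParams

prompts = [
    "Explain what quantization does to model weights.",
    "Write a haiku about GPUs.",
]

# Standard sampling settings; tune for your use case.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# Loads the weights and sets up vLLM's paged-attention KV cache.
llm = LLM(model="meta-llama/Llama-2-7b-hf")

# Generates completions for all prompts in one batch.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt)
    print(output.outputs[0].text)

The speedup mentioned above largely comes from vLLM batching requests and managing the KV cache in pages, so throughput scales well as you add prompts.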