| | serve | swiss_army_llama |
|---|---|---|
| Mentions | 11 | 11 |
| Stars | 3,961 | 867 |
| Growth | 0.8% | - |
| Activity | 9.5 | 8.8 |
| Latest commit | 8 days ago | 30 days ago |
| Language | Java | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
serve
-
Show HN: Llama2 Embeddings FastAPI Server
What's wrong with just using Torchserve[1]? We've been using it to serve embedding models in production.
[1] https://pytorch.org/serve/
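As a rough sketch of the setup the commenter describes (not code from the thread), this is how a client might query a TorchServe instance for embeddings, assuming a model has already been archived and registered under the placeholder name `my_embedder`; the endpoint shape is TorchServe's standard inference API:

```python
# Query a running TorchServe instance via its inference API
# (POST /predictions/{model_name}, default port 8080).
# "my_embedder" is a placeholder for whatever name the model
# was registered under.
import requests

resp = requests.post(
    "http://localhost:8080/predictions/my_embedder",
    data="The quick brown fox".encode("utf-8"),
    headers={"Content-Type": "text/plain"},
    timeout=30,
)
resp.raise_for_status()
embedding = resp.json()  # response format depends on the model's handler
print(len(embedding))
```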
-
How to leverage a local LLM for a client?
Looks like you are already up to speed loading LLaMA models, which is great. Assuming this is a Hugging Face PyTorch checkpoint, I think it should be possible to spin up a TorchServe instance, which has built-in support for API access and HF Transformers. Since scale and latency aren't a big concern for you, this should be a good enough start.
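For illustration, a minimal custom handler along the lines the commenter suggests might look like this; `BaseHandler` is TorchServe's documented extension point, but the model type and generation settings here are assumptions, not a recipe from the thread:

```python
# Hypothetical minimal TorchServe handler for a Hugging Face checkpoint.
# Handles one request at a time for simplicity.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from ts.torch_handler.base_handler import BaseHandler

class HFTextHandler(BaseHandler):
    def initialize(self, context):
        # TorchServe unpacks the model archive into model_dir.
        model_dir = context.system_properties.get("model_dir")
        self.tokenizer = AutoTokenizer.from_pretrained(model_dir)
        self.model = AutoModelForCausalLM.from_pretrained(model_dir)
        self.model.eval()
        self.initialized = True

    def preprocess(self, requests_batch):
        data = requests_batch[0].get("data") or requests_batch[0].get("body")
        text = data.decode("utf-8") if isinstance(data, (bytes, bytearray)) else data
        return self.tokenizer(text, return_tensors="pt")

    def inference(self, inputs):
        with torch.no_grad():
            return self.model.generate(**inputs, max_new_tokens=64)

    def postprocess(self, outputs):
        # Must return one entry per request in the batch (one here).
        return self.tokenizer.batch_decode(outputs, skip_special_tokens=True)
```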
- Is there a course that teaches you how to make an API with a trained model?
-
Pytorch eating memory on every api call
You could split the service in two: Flask for the web part and a separate service to serve the model. I haven't used it myself, but there is https://pytorch.org/serve/
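A minimal sketch of that split, assuming TorchServe (or any model server) is listening on localhost:8080; the model name is a placeholder:

```python
# Flask handles HTTP only; all PyTorch state lives in a separate
# model service, so inference memory never accumulates in the web
# process. "my_model" is a placeholder name.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
MODEL_URL = "http://localhost:8080/predictions/my_model"

@app.route("/predict", methods=["POST"])
def predict():
    resp = requests.post(MODEL_URL, data=request.get_data(), timeout=30)
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(port=5000)
```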
-
Google Kubernetes Engine : Unable to access ports exposed on external IP
I'm attempting to set up inference for a TorchServe container, and it's really tough to figure out what's preventing me from connecting on the ports I'm trying to expose. I'm using Google Kubernetes Engine and Helm, tweaking one of the tutorials at [torchserve](github.com/pytorch/serve). Specifically, it's the GKE tutorial [here](https://github.com/pytorch/serve/tree/master/kubernetes).
-
BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
I made a Space to showcase the speedups we can get in an end-to-end case, using TorchServe to deploy the model on a cloud instance (AWS EC2 g4dn, using one T4 GPU): https://huggingface.co/spaces/fxmarty/bettertransformer-demo
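For context, converting a model with BetterTransformer via the Optimum library is a one-liner; the model name below is illustrative:

```python
# BetterTransformer.transform swaps supported Transformer modules for
# PyTorch's native fastpath kernels; model outputs are unchanged.
# Requires the `optimum` package.
from optimum.bettertransformer import BetterTransformer
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
model = BetterTransformer.transform(model)  # returns the converted model
```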
-
[D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 2), I am aware of a few options. Triton Inference Server is an obvious one, as is the 'transformer-deploy' version from LDS. My only reservation here is that they require model compilation or are architecture-specific. I am aware of others like Bento, Ray Serve, and TorchServe. Ideally I would have something that allows any PyTorch model to be used without the extra compilation effort (or at least optionally), and has some conveniences: ease of use, easy deployment, easy hosting of multiple models, and some dynamic batching. Anyway, I am really interested to hear people's experience here, as I know there are now quite a few options! Any help is appreciated! Disclaimer: I have no affiliation with, and am not connected in any way to, the libraries or companies listed here. These are just the ones I know of. Thanks in advance.
-
how to integrate a deep learning model into a Django webapp!?
If you built the model using PyTorch or TensorFlow, I'd suggest using TorchServe or TF Serving to serve the model as its own "microservice," then query it from your Django app. Among other things, it will make retraining and updating your model a lot easier.
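A sketch of that microservice pattern from the Django side, with the serving URL and model name as placeholders for your TorchServe or TF Serving endpoint:

```python
# Django view that forwards inference requests to a separate model
# service; retraining then only requires redeploying that service.
import requests
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

MODEL_URL = "http://model-service:8080/predictions/my_model"

@csrf_exempt
def predict(request):
    resp = requests.post(MODEL_URL, data=request.body, timeout=30)
    return JsonResponse(resp.json(), safe=False, status=resp.status_code)
```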
- Choose JavaScript 🧠
-
Popular Machine Learning Deployment Tools
swiss_army_llama
-
Ask HN: Cheapest way to run local LLMs?
Depends what you mean by "local". If you mean in your own home, then there isn't a particularly cheap way unless you have a decent spare machine. If you mean "I get to control everything myself", then you can rent a cheap VPS from a value host like Contabo (you can get 8 cores, 30 GB of RAM, and a 1 TB SSD on Ubuntu 22.04 for something like $35/month -- just stick to the US data centers).
Then if you want something that is extremely quick and easy to set up and provides a convenient REST api for completions/embeddings with some other nice features, you might want to check out my project here:
https://github.com/Dicklesworthstone/swiss_army_llama
Especially if you use Docker to set it up, you can go from a brand new box to a working setup in under 20 minutes and then access it via the Swagger page from any browser.
-
What's the difference between LangChain, LlamaIndex, and others like autollm?
I found all of them to be quite bloated and annoying to use directly, which is why I made my own FastAPI based one, Swiss Army Llama. I’m obviously biased, but I far prefer it:
https://github.com/Dicklesworthstone/swiss_army_llama
- Show HN: Swiss Army Llama – A Versatile, FastAPI-Based Multitool for Local LLMs
-
Show HN: Swiss Army Llama
I just added a very cool feature that lets you supply a sample JSON file, and it will automatically generate a BNF grammar for it. You can also supply a Pydantic data model description, and it will generate the corresponding JSON BNF for you:
https://github.com/Dicklesworthstone/swiss_army_llama/blob/m...
And then you can add that grammar file and it will validate it with this:
https://github.com/Dicklesworthstone/swiss_army_llama/blob/5...
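To make the feature concrete: this is the kind of Pydantic data model description you might supply (the fields are invented for illustration; see the repo for the exact input format), along with an equivalent sample JSON:

```python
# Illustrative input only; field names are made up.
from pydantic import BaseModel

class Invoice(BaseModel):
    invoice_id: str
    total: float
    paid: bool

# An equivalent sample JSON file you could supply instead:
# {"invoice_id": "INV-001", "total": 99.5, "paid": false}
```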
-
Show HN: Fast Vector Similarity Using Rust and Python
Cool, I made a similar kind of tool recently that I also shared on HN a couple of weeks ago. You might find it useful for generating and managing LLM embeddings locally:
https://github.com/Dicklesworthstone/llama_embeddings_fastap...
-
Show HN: Llama2 Embeddings FastAPI Server
Thanks for pointing out those models. I see from a quick Hugging Face search that the bge model is available in GGML format. You can trivially add new GGML-format models to the code by simply adding the direct download link to this line:
https://github.com/Dicklesworthstone/llama_embeddings_fastap...
So to add the base bge model, you could just add this URL to the list:
https://huggingface.co/maikaarda/bge-base-en-ggml/resolve/ma...
I will add that as an additional default.
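The pattern being described, sketched with an invented variable name and placeholder entries (the repo's actual identifiers may differ):

```python
# A hard-coded list of direct GGML download links that the service
# fetches at startup; adding a model is appending one URL.
MODEL_DOWNLOAD_URLS = [
    "<existing default model URL>",
    # To add the base bge model, append its direct download link
    # (the full URL is truncated in the comment above):
    "<direct download URL for maikaarda/bge-base-en-ggml>",
]
```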
- Llama2 Embeddings FastAPI Service
- Show HN: LLama2 Embeddings API Service Made with FastAPI
What are some alternatives?
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
llama_embeddings_fastap
serving - A flexible, high-performance serving system for machine learning models
openembeddings - Self-hostable, pay-for-what-you-use embedding server for bge-large-en and arbitrary embedding models, using crypto
JavaScriptClassifier - [Moved to: https://github.com/JonathanSum/JavaScriptClassifier]
np-sims - numpy ufuncs for vector similarity
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
simsimd
kernl - Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
rocketrosti - Chatbot, LLM companion and data retrieval framework
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
DoctorGPT - 💻📚💡 DoctorGPT provides advanced LLM prompting for PDFs and webpages.