| | server | steampipe |
|---|---|---|
| Mentions | 24 | 146 |
| Stars | 7,452 | 6,459 |
| Growth | 3.9% | 1.9% |
| Activity | 9.5 | 9.7 |
| Latest commit | 4 days ago | 7 days ago |
| Language | Python | Go |
| License | BSD 3-clause "New" or "Revised" License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
server
- FLaNK Weekly 08 Jan 2024
- Is there any open source app to load a model and expose API like OpenAI?
- "A matching Triton is not available"
- best way to serve llama V2 (llama.cpp VS triton VS HF text generation inference)
I am wondering what is the best / most cost-efficient way to serve llama V2: llama.cpp (is it production ready or just for playing around?), Triton Inference Server, or HF text generation inference?
- Triton Inference Server - Backend
- Single RTX 3080 or two RTX 3060s for deep learning inference?
For inference of CNNs, memory should really not be an issue. If it is, that's a software engineering problem, not a hardware issue. FP16 or Int8 for weights is fine, and weight size won't increase due to the high resolution. And during inference, memory used for hidden-layer tensors can be reused as soon as the last consumer layer has been processed. You are likely using something that is designed for training to do inference, and that blows up the memory requirement; or, if you are using TensorRT or something like that, you need to be careful to avoid having every task load its own copy of the library code into the GPU. Maybe look at https://github.com/triton-inference-server/server
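To make the weight-memory point above concrete, here is a quick back-of-the-envelope sketch in Python; the 25M parameter count is just an illustrative ResNet-50-sized figure, not something from the thread.

```python
# Back-of-the-envelope: CNN weight memory depends on parameter count, not on the
# input resolution. 25 million parameters is roughly ResNet-50-sized (illustrative).
params = 25_000_000
bytes_per_param = {"fp32": 4, "fp16": 2, "int8": 1}

for dtype, nbytes in bytes_per_param.items():
    mib = params * nbytes / 2**20
    print(f"{dtype}: ~{mib:.0f} MiB of weights")  # ~95 / ~48 / ~24 MiB
```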
- Machine Learning Inference Server in Rust?
I am looking for something like [Triton Inference Server](https://github.com/triton-inference-server/server) or [TFX Serving](https://www.tensorflow.org/tfx/guide/serving), but in Rust. I came across [Orkon](https://github.com/vertexclique/orkhon) which seems to be dormant and a bunch of examples off of the [Awesome-Rust-MachineLearning](https://github.com/vaaaaanquish/Awesome-Rust-MachineLearning)
- Multi-model serving options
You've already mentioned Seldon Core, which is well worth looking at, but if you're just after the raw multi-model serving aspect rather than a fully-fledged deployment framework, you should maybe take a look at the individual inference servers: Triton Inference Server and MLServer both support multi-model serving for a wide variety of frameworks (and custom Python models). MLServer might be a better option as it has an MLflow runtime, but only you will be able to decide that. There also might be other inference servers that do MMS that I'm not aware of.
- I mean... we COULD just make our own lol
[1] https://docs.nvidia.com/launchpad/ai/chatbot/latest/chatbot-triton-overview.html
[2] https://github.com/triton-inference-server/server
[3] https://neptune.ai/blog/deploying-ml-models-on-gpu-with-kyle-morris
[4] https://thechief.io/c/editorial/comparison-cloud-gpu-providers/
[5] https://geekflare.com/best-cloud-gpu-platforms/
- Why TensorFlow for Python is dying a slow death
"TensorFlow has the better deployment infrastructure"
Tensorflow Serving is nice in that it's so tightly integrated with Tensorflow. As usual, that goes both ways. It's so tightly coupled to Tensorflow that if the mlops side of the solution is using Tensorflow Serving, you're going to get "trapped" in the Tensorflow ecosystem (essentially).
For pytorch models (and just about anything else) I've been really enjoying Nvidia Triton Server[0]. Of course it further entrenches Nvidia and CUDA in the space (although you can execute models CPU only) but for a deployment today and the foreseeable future you're almost certainly going to be using a CUDA stack anyway.
Triton Server is very impressive and I'm always surprised to see how relatively niche it is.
[0] - https://github.com/triton-inference-server/server
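Since several of the posts above recommend Triton, here is a minimal sketch of what a client request looks like using NVIDIA's official Python client (`tritonclient`); the model name, input/output tensor names, and shape are hypothetical placeholders that would have to match your model's `config.pbtxt`.

```python
# Minimal sketch: send an inference request to a running Triton server over HTTP.
# Assumes the server listens on localhost:8000 and serves a model "my_model"
# (hypothetical) with one FP32 input "INPUT0" of shape [1, 3, 224, 224] and one
# output "OUTPUT0". Requires `pip install tritonclient[http] numpy`.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request input and attach the data.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)
out = httpclient.InferRequestedOutput("OUTPUT0")

# Run inference and read the result back as a numpy array.
result = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT0").shape)
```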
steampipe
- Steampipe: Dynamically query APIs, code and more with SQL
- Cloud Tools You Probably Haven't Heard Of
Steampipe is a tool for querying cloud APIs and other data sources using SQL in a zero-ETL manner.
- Show HN: Query Your Sheets with SheetSQL
Readers may also enjoy Steampipe [1], an open source CLI to live query Google Sheets [2] and 140+ other services with SQL (e.g. AWS, GitHub, etc). It uses Postgres Foreign Data Wrappers under the hood and supports joins etc across the services. (Disclaimer - I'm a lead on the project.)
1 - https://github.com/turbot/steampipe
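Because Steampipe exposes its data through an embedded Postgres, you can also query it from ordinary Postgres tooling rather than the CLI. Below is a minimal sketch in Python, assuming `steampipe service start` is running locally with the AWS plugin installed; the default port/database/user and the `aws_s3_bucket` table are taken from the Steampipe docs as I recall them, so verify against your install, and the password environment variable is just a placeholder.

```python
# Minimal sketch: query Steampipe's embedded Postgres from Python.
# Assumes a local `steampipe service start` (which prints connection details,
# including the password) with the AWS plugin installed.
# Requires `pip install psycopg2-binary`.
import os
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=9193,                 # Steampipe's default database port
    dbname="steampipe",
    user="steampipe",
    password=os.environ["STEAMPIPE_PASSWORD"],  # placeholder: set this yourself
)

with conn, conn.cursor() as cur:
    # aws_s3_bucket is a table provided by the Steampipe AWS plugin.
    cur.execute("select name, region from aws_s3_bucket order by name limit 10;")
    for name, region in cur.fetchall():
        print(f"{region}\t{name}")
```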
- Osquery: An sqlite3 virtual table exposing operating system data to SQL
Be mindful of its AGPLv3 license https://github.com/turbot/steampipe/blob/v0.21.8/LICENSE (AFAIK v0.4.3 is the last MIT release https://github.com/turbot/steampipe/blob/v0.4.3/LICENSE), and the actual providers are Apache 2 <https://github.com/turbot/steampipe-plugin-aws/blob/v0.131.0...> (but I don't know if provider drift makes them compatible with 0.4 or not)
iasql seems to be AWS only, but good for them for taking this on:
- How to run an AWS CIS v3.0 assessment in CloudShell
In a prior post I showed how to install Steampipe in AWS CloudShell to instantly query 460+ resource types from your AWS APIs using SQL, and in another post how to use the Steampipe AWS Compliance mod to assess 25+ security benchmarks across your AWS accounts.
- Git Query Language
- Query Cloud and SaaS APIs with SQL
- Cutting down AWS cost by $150k per year simply by shutting things off
Readers may find Steampipe's [1] AWS Thrifty Mod [2] useful. It will automatically scan multiple accounts and regions for 50 cost-saving opportunities, many of which look for over-provisioned or unused resources. For example, it's crazy how much you can save by doing things like just converting your EBS volumes to the newer gp3 type (see the sketch after this item). Combine with Flowpipe [3] to automate checks and actions. It's all open source and extensible.
1 - https://github.com/turbot/steampipe
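As a concrete illustration of that gp2-to-gp3 point, here is a hedged sketch of the kind of check the Thrifty mod runs, reusing the local Steampipe Postgres connection from the earlier example; the `aws_ebs_volume` table and its columns come from the Steampipe AWS plugin as I recall them, so verify against your plugin version.

```python
# Sketch: list gp2 EBS volumes that are candidates for migration to gp3.
# Reuses the connection pattern from the earlier Steampipe example; table and
# column names are from the Steampipe AWS plugin (verify against your version).
import os
import psycopg2

conn = psycopg2.connect(host="localhost", port=9193, dbname="steampipe",
                        user="steampipe", password=os.environ["STEAMPIPE_PASSWORD"])
with conn, conn.cursor() as cur:
    cur.execute("""
        select volume_id, size, region
        from aws_ebs_volume
        where volume_type = 'gp2'
        order by size desc;
    """)
    for volume_id, size, region in cur.fetchall():
        print(f"{volume_id}  {size:>6} GiB  {region}")
```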
- FLaNK Weekly 08 Jan 2024
- Zero-ETL for Postgres: Live-query cloud APIs with 100 open source FDWs
Steampipe [1] is an open source project [2] that includes an embedded Postgres to instantly query cloud, code & more with SQL. This release expands our plugin ecosystem [3] to be a full Zero-ETL platform. Steampipe plugins can now run natively in your own Postgres as Foreign Data Wrappers [4], as SQLite extensions [5] or as simple data export tools [6]. Please give it a try, we'd love your feedback and contributions!
1 - https://steampipe.io
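For the SQLite-extension path mentioned in that post, usage from Python's standard `sqlite3` module looks roughly like this; the extension file name is a placeholder for whichever plugin build you download, and your Python must have been compiled with extension loading enabled.

```python
# Sketch: load a Steampipe plugin distributed as a SQLite extension.
# "./steampipe_sqlite_aws.so" is a placeholder for the plugin binary you download;
# extension loading must be enabled in your Python's sqlite3 build.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)
conn.load_extension("./steampipe_sqlite_aws.so")  # path to the downloaded extension
conn.enable_load_extension(False)

# Once loaded, the plugin's tables can be queried like ordinary SQLite tables.
for row in conn.execute("select name, region from aws_s3_bucket limit 5;"):
    print(row)
```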
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
cloudquery - The open source high performance ELT framework powered by Apache Arrow
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
cloud-custodian - Rules engine for cloud security, cost optimization, and governance, DSL in yaml for policies to query, filter, and take actions on resources
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
metriql - The metrics layer for your data. Join us at https://metriql.com/slack
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
inspec-aws - InSpec AWS Resource Pack https://www.inspec.io/
Triton - Triton is a dynamic binary analysis library. Build your own program analysis tools, automate your reverse engineering, perform software verification or just emulate code.
steampipe-mod-github-sherlock - Interrogate your GitHub resources with the help of the world's greatest detectives: Powerpipe + Steampipe + Sherlock.
Megatron-LM - Ongoing research training transformer models at scale
embedded-postgres-binaries - Lightweight bundles of PostgreSQL binaries with reduced size intended for testing purposes.