dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. (by triton-inference-server)

dali_backend reviews and mentions

Posts with mentions or reviews of dali_backend. The most recent was on 2024-02-08.
  • Ollama releases OpenAI API compatibility
    12 projects | news.ycombinator.com | 8 Feb 2024
    - While keeping power utilization below X

    Model Navigator [3] will take the exported model and dynamically deploy the package to a Triton instance running on your actual inference-serving hardware, then generate requests against your SLAs to find the optimal model configuration. You even get exported metrics and pretty reports for every configuration attempted. You can take the same exported package, change the SLA params, and it will automatically regenerate the configuration for you.

    - Performance on a completely different level. TensorRT-LLM in particular is extremely new and very early, but at high scale you can already see > 10k RPS on a single node.

    - gRPC support [4]. Especially when using pre/post-processing, ensembles, etc., you can configure clients programmatically to use the individual models or the ensemble chain (as one example). This opens up a very wide range of powerful architecture options that simply aren't available anywhere else. The gRPC interface can be thought of as something like vLLM's AsyncLLMEngine: it can abstract actual input/output or expose raw in/out, so models, tokenizers, decoders, etc. can send and receive raw data/numpy/tensors (see the client sketch after the links below).

    - DALI support [5]. Combined with everything above, you can add DALI to the processing chain to do things like take input images/audio/etc., copy them to the GPU once, GPU-accelerate scaling/conversion/resampling, and get the output (a sketch of such a pipeline follows the links below).

    vLLM and HF TGI are very cool and I use them in certain cases. The fact that you can give them a HF model and they fire up with a single command and offer good performance is very impressive, but there are untold reasons why providers like these [0][1][2] use Triton. It's in a class of its own.

    [0] - https://mistral.ai/news/la-plateforme/

    [1] - https://www.cloudflare.com/press-releases/2023/cloudflare-po...

    [2] - https://www.nvidia.com/en-us/case-studies/amazon-accelerates...

    [3] - https://github.com/triton-inference-server/model_navigator

    [4] - https://github.com/triton-inference-server/client/blob/main/...

    [5] - https://github.com/triton-inference-server/dali_backend
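
To make the DALI point in [5] concrete: the dali_backend repo expects the pre-processing pipeline to be defined with DALI's Python API, serialized, and placed in a Triton model repository next to a config.pbtxt that sets backend: "dali". Below is a minimal sketch of such a pipeline; the batch size, image dimensions, and normalization constants are illustrative, and the tensor name DALI_INPUT_0 follows the naming used in the repo's examples.

```python
import nvidia.dali as dali
import nvidia.dali.fn as fn
import nvidia.dali.types as types

# Illustrative pre-processing pipeline: receive encoded image bytes,
# decode on the GPU, resize, and normalize to CHW float32.
@dali.pipeline_def(batch_size=256, num_threads=4, device_id=0)
def preprocessing():
    # Encoded bytes arrive from the Triton request as a 1-D uint8 tensor.
    images = fn.external_source(device="cpu", name="DALI_INPUT_0")
    # device="mixed" splits decoding between the CPU and the GPU.
    images = fn.decoders.image(images, device="mixed", output_type=types.RGB)
    images = fn.resize(images, resize_x=224, resize_y=224)
    return fn.crop_mirror_normalize(
        images,
        dtype=types.FLOAT,
        output_layout="CHW",
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
    )

# Serialize so the DALI backend can load the pipeline from the model
# repository, e.g. model_repository/dali/1/model.dali.
preprocessing().serialize(filename="model.dali")
```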
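
The gRPC point above can be illustrated with the official tritonclient package [4]. This is a hedged sketch rather than the comment author's setup: the model name "dali" and the output name DALI_OUTPUT_0 are assumptions matching the pipeline sketch above.

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Connect to Triton's gRPC endpoint (default port 8001).
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Send the raw encoded file; decoding happens on the GPU inside DALI.
raw = np.fromfile("image.jpg", dtype=np.uint8)
inp = grpcclient.InferInput("DALI_INPUT_0", [1, raw.size], "UINT8")
inp.set_data_from_numpy(raw.reshape(1, -1))

result = client.infer(model_name="dali", inputs=[inp])
print(result.as_numpy("DALI_OUTPUT_0").shape)  # e.g. (1, 3, 224, 224)
```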

Stats

Basic dali_backend repo stats
Mentions: 1
Stars: 117
Activity: 6.8
Last commit: 25 days ago
