| | tvm | nebuly |
|---|---|---|
| Mentions | 15 | 105 |
| Stars | 11,186 | 8,363 |
| Growth | 1.3% | 0.1% |
| Activity | 9.9 | 8.4 |
| Latest commit | 2 days ago | 6 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tvm
-
Making AMD GPUs competitive for LLM inference
Yes, this is coming! I and others at OctoML and in the TVM community are actively working on multi-GPU support in the compiler and runtime. Here are some of the merged and active PRs on the multi-GPU (multi-device) roadmap:
Support in TVM’s graph IR (Relax) - https://github.com/apache/tvm/pull/15447
-
VSL: Vlang's Scientific Library
Would it make sense to add backend support for OpenXLA, Apache TVM, Jittor, or similar projects, to get GPU, TPU, and other accelerator support for free?
- Apache TVM
-
MLC LLM - "MLC LLM is a universal solution that allows any language model to be deployed natively on a diverse set of hardware backends and native applications, plus a productive framework for everyone to further optimize model performance for their own use cases."
I have tried the iPhone app. It's fast. They're using Apache TVM, which should allow better use of native accelerators on different devices: using Metal on Apple and Vulkan or CUDA or whatever elsewhere, instead of just running the thing on the CPU like llama.cpp.
-
ONNX Runtime merges WebGPU back end
I was going to answer the same. I find the approach of machine-learning compilers that directly compile models to host and device code better than having to bring along a huge runtime. There are exciting projects in this area like TVM Unity [1], IREE [2], or torch.export [3].
[1] https://github.com/apache/tvm/tree/unity
[2] https://github.com/openxla/iree
[3] https://pytorch.org/get-started/pytorch-2.0/#inference-and-e...
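To make that compile-ahead-of-time idea concrete, here is a minimal sketch using torch.export, which the comment mentions; the TinyNet model and its shapes are made up for illustration, not taken from any of the linked projects.

```python
# Sketch: capture a model as a self-contained graph with torch.export,
# rather than shipping a full framework runtime at inference time.
# TinyNet is a hypothetical stand-in model.
import torch


class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(8, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))


example_inputs = (torch.randn(1, 8),)
exported = torch.export.export(TinyNet().eval(), example_inputs)
print(exported.graph)  # the captured graph, independent of Python control flow
```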
-
ESP32 TensorFlow Lite
Apache TVM home page: https://tvm.apache.org/
-
Decompiling x86 Deep Neural Network Executables
It's pretty clear it's referring to the output of Apache TVM and Meta's Glow.
-
Run Stable Diffusion on Your M1 Mac’s GPU
As mentioned in sibling comments, Torch is indeed the glue in this implementation. Other glues are TVM [0] and ONNX [1].
These just cover the neural net though, and there is lots of surrounding code and pre-/post-processing that isn't covered by these systems.
For models on Replicate, we use Docker, packaged with Cog for this stuff.[2] Unfortunately Docker doesn't run natively on Mac, so if we want to use the Mac's GPU, we can't use Docker.
I wish there was a good container system for Mac. Even better if it were something that spanned both Mac and Linux. (Not as far-fetched as it seems... I used to work at Docker and spent a bit of time looking into this...)
[0] https://tvm.apache.org/
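As a rough illustration of the point that these systems only cover the neural net: in the sketch below, ONNX Runtime handles just the network, while the pre-/post-processing stays in ordinary Python. The file name "model.onnx" and the tensor name "input" are placeholders, not anything from the post.

```python
# Sketch: the neural net runs under an optimized runtime, but the
# surrounding pre-/post-processing is ordinary Python that graph
# formats like ONNX don't cover.
import numpy as np
import onnxruntime as ort


def preprocess(image: np.ndarray) -> np.ndarray:
    # Normalize to [0, 1] and add a batch dimension.
    return (image.astype("float32") / 255.0)[None, ...]


def postprocess(logits: np.ndarray) -> int:
    # Reduce raw network output to an application-level answer.
    return int(logits.argmax())


session = ort.InferenceSession("model.onnx")  # placeholder model file
x = preprocess(np.zeros((3, 224, 224), dtype=np.uint8))
(logits,) = session.run(None, {"input": x})
print(postprocess(logits))
```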
-
How to get started with machine learning.
Or use TVM; the idea is to compile your model into code that you can load at runtime. Similar to onnxruntime, it only does DNN inference, so you still need your own domain-specific code.
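For a feel of that compile-then-load workflow, here is a minimal sketch using TVM's Relay frontend and graph executor; the ONNX file, input name, and shapes are placeholders.

```python
# Sketch: compile an ONNX model with TVM, export a shared library, then
# load and run it at runtime. "model.onnx" and "input" are placeholders.
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
lib.export_library("model.so")  # the artifact you ship

# Later, at runtime (no compiler needed):
loaded = tvm.runtime.load_module("model.so")
device = tvm.cpu()
runtime = graph_executor.GraphModule(loaded["default"](device))
runtime.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
runtime.run()
output = runtime.get_output(0).numpy()
```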
-
An open-source library for optimizing deep learning inference: (1) you select the optimization target, (2) nebullvm searches for the best optimization techniques for your model-hardware configuration, and (3) it serves an optimized model that runs much faster at inference.
Open-source projects leveraged by nebullvm include OpenVINO, TensorRT, Intel Neural Compressor, SparseML and DeepSparse, Apache TVM, ONNX Runtime, TFlite and XLA. A huge thank you to the open-source community for developing and maintaining these amazing projects.
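For context, a rough sketch of that three-step flow using the optimize_model entry point from Speedster (the later packaging of nebullvm's API); exact parameter names vary across releases, so treat this as illustrative rather than canonical.

```python
# Sketch of the (1) select / (2) search / (3) serve flow. Parameter names
# follow the project's README at the time and may differ across versions.
import torch
import torchvision.models as models
from speedster import optimize_model  # assumes the Speedster packaging

model = models.resnet50(weights=None)
# (1) hand over the model plus sample inputs for your hardware
input_data = [((torch.randn(1, 3, 224, 224),), torch.tensor([0])) for _ in range(10)]
# (2) the library searches backends (TensorRT, OpenVINO, TVM, ONNX Runtime, ...)
optimized_model = optimize_model(model, input_data=input_data,
                                 optimization_time="constrained")
# (3) run the fastest variant it found
with torch.no_grad():
    out = optimized_model(torch.randn(1, 3, 224, 224))
```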
nebuly
- Nebuly – The LLM Analytics Platform
- Ask HN: Any tools or frameworks to monitor the usage of OpenAI API keys?
-
What are you building with LLMs? I'm writing an article about what people are building with LLMs
Hi everyone. I'm the creator of ChatLLaMA https://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllama, an open-source framework to train LLMs with limited resources. There's been amazing usage of LLMs these days, from chatbots that retrieve a company's product information, to cooking assistants for traditional dishes, and much more. And you? What are you building, or what would you love to build, with LLMs? Let me know and I'll share the article about your stories soon. https://qpvirevo4tz.typeform.com/to/T3PruEuE Cheers
-
Show HN: ChatLLaMA – A ChatGPT style chatbot for Facebook's LLaMA
How does it differ from the original ChatLLaMA? https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
-
🤖🌟 Unlock the Power of Personal AI: Introducing ChatLLaMA, Your Custom Personal Assistant! 🚀💬
Was this made with the ChatLLaMA library? https://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllama
- Meta LLM LLaMA leaked, all over the internet as we speak
- Meta LLM LLAMA leaked, it's all over the internet as we speak.
- Meta LLM LLAMMA leaked, it's all over the internet as we speak.
-
Plug-and-play modules to optimize the performance of your AI systems
Some of the available modules include:
Speedster: Automatically apply the best set of SOTA optimization techniques to achieve the maximum inference speed-up on your hardware. https://github.com/nebuly-ai/nebullvm/blob/main/apps/acceler...
Nos: Automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas. https://github.com/nebuly-ai/nos
ChatLLaMA: Build a faster and cheaper ChatGPT-like training process based on LLaMA architectures. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
OpenAlphaTensor: Increase the computational performance of an AI model with a custom-generated matrix multiplication algorithm fine-tuned for your specific hardware. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
Forward-Forward: The Forward-Forward algorithm is a method for training deep neural networks that replaces the backpropagation forward and backward passes with two forward passes. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
- Open source implementation for LLaMA-based ChatGPT
What are some alternatives?
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
AITemplate - AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. It is specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
alpaca-lora - Instruct-tune LLaMA on consumer hardware
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
stable-diffusion
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference