| | triton | automatic |
|---|---|---|
| Mentions | 30 | 185 |
| Stars | 11,054 | 4,745 |
| Growth | 4.3% | - |
| Activity | 9.9 | 9.9 |
| Latest commit | 3 days ago | about 4 hours ago |
| Language | C++ | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
triton
- OpenAI Triton: language and compiler for highly efficient Deep-Learning
-
Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
There's a ton of cool opportunity in the runtime layer. I've been keeping my eye on the compiler-based approaches. From what I've gathered, many of the larger "production" inference tools use compilers:
- https://github.com/openai/triton
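To make "compiler-based" concrete: Triton kernels are ordinary Python functions that the Triton compiler JIT-compiles to GPU code. A minimal sketch along the lines of the project's vector-add tutorial (illustrative, not quoted from the thread):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the tensors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

You write tile-level logic in Python; the compiler handles the low-level scheduling that hand-written CUDA would make you manage yourself.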
- Core Functionality for AMD #1983
- Project name easily confused with Nvidia Triton
-
Nvidia's CUDA Monopoly
Does anyone have more inside knowledge from OpenAI or AMD on AMDGPU support for Triton?
I see this:
https://github.com/openai/triton/issues/1073
But it's not clear to me whether we'll see AMD GPUs as first-class citizens for PyTorch in the future.
- @soumithchintala (Co-founded and leads @PyTorch at Meta) on Twitter: I'm fairly puzzled by $NVDA skyrocketing... (cont.)
-
The tiny corp raised $5.1M
I thought this was a good overview of the idea that Triton can circumvent the CUDA moat: https://www.semianalysis.com/p/nvidiaopenaitritonpytorch
It also looks like they added an MLIR backend to Triton, though I wonder if Mojo has advantages since it was built on MLIR? https://github.com/openai/triton/pull/1004
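For context on the "circumvent the CUDA moat" claim: in PyTorch 2.x, torch.compile hands the traced graph to TorchInductor, which on GPUs emits generated Triton kernels instead of dispatching to hand-written CUDA libraries. A minimal sketch, assuming a CUDA-capable PyTorch 2.x install:

```python
import torch

def f(x: torch.Tensor) -> torch.Tensor:
    # An arbitrary pointwise + reduction pattern that Inductor can fuse.
    return torch.nn.functional.gelu(x) * x.sum(dim=-1, keepdim=True)

# On GPU, TorchInductor lowers this graph to generated Triton kernels.
compiled_f = torch.compile(f)

x = torch.randn(1024, 1024, device="cuda")
y = compiled_f(x)  # first call compiles; subsequent calls reuse the kernels
```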
-
Anyone hosting a local LLM server
I'm pretty happy with the setup, because it allows me to keep all the AI stuff and its dozens of conda envs and repos separate from my normal setup and "portable". It may have some performance impact (although I don't personally notice any significant difference versus running it "natively" on Windows), and it may enable some extra functionality, such as access to OpenAI's Triton, but that's currently neither here nor there.
- Triton: Runtime for highly efficient custom Deep-Learning primitives
-
Mojo – a new programming language for all AI developers
Very cool development. There is too much busywork going from development to test to production, and this will help to unify everything. OpenAI Triton (https://github.com/openai/triton/) is going for a similar goal, but this is a more fundamental approach.
automatic
-
Open-source project ZLUDA lets CUDA apps run on AMD GPUs
> it won't ever be a viable option
For production workloads, I generally agree. It's an unsupported hack with a questionable future, I wouldn't do anything money-making with it.
However, for tinkering and consumer workloads, it already works pretty well. Enough of cuDNN and cuBLAS works to run PyTorch and, in turn, Stable Diffusion with https://github.com/lshqqytiger/ZLUDA - there's even a fairly user-friendly setup process in https://github.com/vladmandic/automatic .
I was able to get a personal non-ML-related project working on my AMD card in just a few minutes, which saved me a lot of development time before I deployed the production workload on NV hardware. (This is probably why AMD pulled the plug on the project: it's almost more of a boost to NV than anything else, and AMD really needs people writing code on ROCm to deploy on AMD datacenter hardware.)
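For anyone trying the same thing, here is a minimal smoke test for a ZLUDA-backed PyTorch install, assuming a setup along the lines of the repos above, where the AMD card is surfaced through PyTorch's CUDA backend:

```python
import torch

# Under ZLUDA the AMD GPU shows up via the CUDA backend, so the
# standard CUDA checks double as a ZLUDA smoke test.
assert torch.cuda.is_available(), "CUDA backend not visible - ZLUDA not hooked in"
print(torch.cuda.get_device_name(0))  # should report the AMD card

x = torch.randn(512, 512, device="cuda")
y = x @ x.T  # matmul exercises the cuBLAS shim
print(y.sum().item())
```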
-
Show HN: Comflowy – A ComfyUI Tutorial for Beginners
While I currently use SD.Next[1], I have tested ComfyUI locally with my AMD card. The UI can be daunting, but you learn a great deal about how a Stable Diffusion pipeline works. In addition, some innovations and advances find their way into ComfyUI first.
[1] https://github.com/vladmandic/automatic
-
Is it just me, or is SDXL bad at rendering trees, grass, and vegetation in general? It looks like stop motion or an unfinished painting. How can I fix it?
I used SD.Next (https://github.com/vladmandic/automatic) with https://civitai.com/models/82098/add-more-details-detail-enhancer-tweaker-lora and epicphotogasm_lastUnicorn.
-
Is SDXL supposed to be this slow on my system?
I found this thread on GitHub talking about how this was fixed in the latest version with an optional setting. I tried enabling it, as they mentioned, but it just resulted in an immediate CUDA out-of-memory error when starting generation. So it seems I actually need the shared memory, which I assume is my issue.
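One rough way to check whether generation really is spilling into shared (system) memory is to compare PyTorch's peak allocator usage against the card's dedicated VRAM; a sketch, run in the same Python environment after a generation:

```python
import torch

# If peak usage approaches total dedicated VRAM, the driver's system-memory
# fallback (the "shared memory" discussed above) is what keeps generation alive.
props = torch.cuda.get_device_properties(0)
print(f"dedicated VRAM: {props.total_memory / 2**30:.1f} GiB")
print(f"peak allocated: {torch.cuda.max_memory_allocated(0) / 2**30:.1f} GiB")
```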
-
Another Monday, another big release from SDNext!
As always, do check out our more detailed changelog, give us a quick install from our Repo, and stop by our Discord Server for any questions or help you may need.
-
What's the best stable diffusion client for base m1 MacBook air?
SD.Next
- Intel Arc 770 with Linux Mint, support requested!
-
SDNext - ControlNet keeps being disabled after installing SDXL?
Today I finally wanted to give SDXL a chance, so I set everything up according to Vladmandic's wiki: https://github.com/vladmandic/automatic/wiki/SD-XL
-
Vlad SD.Next SDXL DirectML: 'StableDiffusionXLPipeline' object has no attribute 'alphas_cumprod'
I'm trying to get SDXL working on Vlad's SDNext, but I keep getting the error in the title when trying to run basic operations. I'm not sure what's going on; I followed his guide for it to a T.
-
[P] Stable Diffusion XL (SDXL) Benchmark - 769 images per dollar on consumer GPUs
We used an inference container based on SDNext, along with a custom worker written in TypeScript that implemented the job processing pipeline. The worker used HTTP to communicate with both the SDNext container and our batch framework.
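As a rough illustration of the worker's SDNext side (sketched here in Python rather than the original TypeScript), assuming the container exposes the AUTOMATIC1111-compatible REST API on its default port; the endpoint and payload fields follow that API:

```python
import base64
import requests

SDNEXT_URL = "http://localhost:7860"  # hypothetical container address

payload = {
    "prompt": "a photo of a mountain lake at sunrise",
    "steps": 30,
    "width": 1024,
    "height": 1024,
}
# txt2img returns base64-encoded images in the "images" field.
resp = requests.post(f"{SDNEXT_URL}/sdapi/v1/txt2img", json=payload, timeout=300)
resp.raise_for_status()
for i, img_b64 in enumerate(resp.json()["images"]):
    with open(f"out_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
```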
What are some alternatives?
cuda-python - CUDA Python Low-level Bindings
SHARK - SHARK - High Performance Machine Learning Distribution
Halide - a language for fast, portable data-parallel computation
stable-diffusion-webui-colab - stable diffusion webui colab
GPU-Puzzles - Solve puzzles. Learn CUDA.
kohya_ss
dfdx - Deep learning in Rust, with shape checked tensors and neural networks
stable-diffusion-webui-ux - Stable Diffusion web UI UX
web-llm - Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
cutlass - CUDA Templates for Linear Algebra Subroutines
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI