Yeah, if they're going to compare techniques breaking various features and extensions, I think it's only fair to compare it to similarly incompatible techniques for nvidia.
In which case, I would recommend the AITemplate[0] extension for ComfyUI, which, running at a (hopefully) standardized 512x512 resolution with the default Euler a sampler, nets me about 23.8 it/s on my 300 W 3090.
[0] https://github.com/FizzleDorf/AIT
> The AMD experience on Linux is vastly better than the Nvidia one.
I just wish we had an equivalent of AMD Software on Linux, so I could mess around with the settings more.
For example, I like to limit the GPU to 50-75% of its total power for ambient heat/cooling reasons, or for UPS/PSU/electricity bill reasons when specific games make it hard to cap framerates.
With AMD Software on Windows, it's no big deal. On Linux, the best I found was CoreCtrl: https://gitlab.com/corectrl/corectrl
Sadly, it doesn't seem to work all that well for my use case, which I mentioned in my blog post when using Linux instead of Windows as my daily driver at home too: https://blog.kronis.dev/articles/a-week-of-linux-instead-of-...
> You see, by default the card controls its own GPU and memory clock values, which means that when idle the GPU draws around 40 W of power. However, if I want to set a limit on how many watts in total it can use, it also makes me set the GPU and memory clock values, which will then be fixed: so at idle the GPU will use about 60 W of power.
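For what it's worth, on AMD cards the amdgpu driver usually exposes a board power cap through hwmon in sysfs, which limits power without pinning the clocks the way the blog post describes. A minimal sketch (the `card0` path and the 60% figure are assumptions; the hwmon directory name varies per system):

```shell
# Hypothetical sketch: cap an AMD GPU to ~60% of its maximum board power
# via the amdgpu hwmon interface. Values are in microwatts.
hwmon=$(ls -d /sys/class/drm/card0/device/hwmon/hwmon* 2>/dev/null | head -n1)
if [ -n "$hwmon" ] && [ -f "$hwmon/power1_cap_max" ]; then
    max_uw=$(cat "$hwmon/power1_cap_max")   # driver-reported maximum power cap
    cap_uw=$((max_uw * 60 / 100))           # assumed target: 60% of maximum
    echo "$cap_uw" | sudo tee "$hwmon/power1_cap"
else
    echo "amdgpu hwmon power cap not found" >&2
fi
```

This leaves the clocks under the card's own control, so idle draw shouldn't rise the way it does when you fix GPU and memory clocks manually.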