| | kanshi | tensorrt_demos |
|---|---|---|
| Mentions | 22 | 5 |
| Stars | 553 | 1,720 |
| Growth | - | - |
| Activity | 5.9 | 3.1 |
| Last commit | over 2 years ago | about 1 year ago |
| Language | C | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kanshi
-
Sway external display
Without digging into your problem, I'll just mention one option/tool as an addition: https://github.com/emersion/kanshi
-
Starting kanshi from sway
This works, but after a reload of the config the configuration is gone. Then I found this discussion and changed the line to:
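The exact line the commenter switched to isn't quoted above, but a common pattern for keeping kanshi alive across `swaymsg reload` (a sketch, not necessarily the commenter's fix) is to use `exec_always` and kill any previous instance first:

```
# ~/.config/sway/config -- sketch
# `exec` runs only once at login; `exec_always` also re-runs on every
# config reload, so kill any earlier kanshi instance to avoid duplicates.
exec_always "pkill kanshi; kanshi &"
```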
-
Arch users belike
Kanshi is not exactly what you're looking for, but I stumbled across this when I was writing the dynamic display configuration page.
-
Terminal font size on hidpi + normal display
You should use different scale factors on each monitor. You can do it in your sway configuration, or use something like kanshi that automatically applies different settings based on what is connected.
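As a sketch of that per-monitor approach (output names and scale factors here are examples; list yours with `swaymsg -t get_outputs`), a kanshi profile can pin a different `scale` to each display:

```
# ~/.config/kanshi/config -- sketch with example output names
profile hidpi-docked {
    # laptop hidpi panel at 2x, external monitor at 1x -- adjust to taste
    output eDP-1 scale 2
    output "Some Vendor ExternalMonitor 12345" scale 1
}
```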
-
Inaccessible workspaces
If this fixes your problem you'll want to apply this fix: Disable Laptop screen upon closing screen. Or you can use something like Automatic display profile switcher when you connect your external display, to switch to it while also disabling your laptop screen.
-
How to toggle transparency and gaps? Preserve display configuration?
I also heard of kanshi for monitor configuration, but I believe there's an open bug where sway reload breaks kanshi config... which kind of defeats the purpose.
-
Single Background / Multiple Monitors
-
Script for docked mode
kanshi should be able to do it.
-
Organising workspaces on multiple monitors
Assuming your connected monitors may change - you plug another one in and then want to move workspaces to the new output - kanshi can help you with that. You can tell kanshi to execute commands when it matches a profile.
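A hedged sketch of that `exec` directive (output names are placeholders) that pins workspace 1 to the external display whenever a docked profile matches:

```
# ~/.config/kanshi/config -- sketch; eDP-1/DP-1 are example output names
profile docked {
    output eDP-1 enable
    output DP-1 enable
    # run each time this profile is applied
    exec swaymsg workspace 1, move workspace to output DP-1
}
```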
-
Sway not picking the highest refresh rate available
For this you can use kanshi.
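A minimal sketch, assuming an output called DP-1 that sway brings up at 60 Hz by default: kanshi's `mode` directive can pin the resolution and refresh rate explicitly:

```
# ~/.config/kanshi/config -- sketch; output name and mode are examples
profile {
    # WIDTHxHEIGHT@RATE -- forces 144 Hz instead of the default pick
    output DP-1 mode 1920x1080@144Hz
}
```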
tensorrt_demos
-
lowering size of YOLOV4 detection model
the tensorrt_demos GitHub repository
-
Jetson Nano: TensorFlow model. Possibly I should use PyTorch instead?
https://github.com/NVIDIA-AI-IOT/torch2trt <- pretty straightforward https://github.com/jkjung-avt/tensorrt_demos <- this helped me a lot
-
PyTorch 1.8 release with AMD ROCm support
> I'll also add a caveat that toolage for Jetson boards is extremely incomplete.
A hundred times this. I was about to write another rant here but I already did that[0] a while ago, so I'll save my breath this time. :)
Another fun fact regarding toolage: Today I discovered that many USB cameras work poorly on Jetsons (at least when using OpenCV), probably due to different drivers and/or the fact that OpenCV doesn't support ARM64 as well as it does x86_64. :(
> They supply you with a bunch of sorely outdated models for TensorRT like Inceptionv3 and SSD-MobileNetv2 and VGG-16.
They supply you with such models? That's news to me. AFAIK converting something like SSD-MobileNetv2 from TensorFlow to TensorRT still requires substantial manual work and magic, as this code[1] attests to. There are countless (countless!) posts on the Nvidia forums by people complaining that they're not able to convert their models.
[0]: https://news.ycombinator.com/item?id=26004235
[1]: https://github.com/jkjung-avt/tensorrt_demos/blob/master/ssd... (In fact, this is the only piece of code I've found on the entire internet that managed to successfully convert my SSD-MobileNetV2.)
-
I'm tired of this anti-Wayland horseshit
-
H.264 hardware acceleration for surveillance station performance
It took some work to get it compiled on the Nano, but I used this guy's work to get started: https://jkjung-avt.github.io/tensorrt-yolov4/ and https://github.com/jkjung-avt/tensorrt_demos
What are some alternatives?
wlr-randr - An xrandr clone for wlroots compositors
YOLOX - YOLOX is a high-performance anchor-free YOLO, exceeding yolov3~v5 with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documentation: https://yolox.readthedocs.io/
wayvnc - A VNC server for wlroots based Wayland compositors
torch2trt - An easy to use PyTorch to TensorRT converter
dwl - dwm for Wayland - ARCHIVE: development has moved to Codeberg
yolov4-custom-functions - A Wide Range of Custom Functions for YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny Implemented in TensorFlow, TFLite, and TensorRT.
wdisplays - GUI display configurator for wlroots compositors
tensorflow-yolov4-tflite - YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny implemented in TensorFlow 2.3.1 and Android. Converts YOLOv4 .weights to TensorFlow, TensorRT, and TFLite.
wayland - Core Wayland protocol and libraries (mirror)
jetson-inference - Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.