ComfyUI-Depth-Anything-Tensorrt
inference
| | ComfyUI-Depth-Anything-Tensorrt | inference |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 58 | 1,098 |
| Growth | - | 6.9% |
| Activity | 7.7 | 9.9 |
| Last commit | 1 day ago | 3 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
YOLOv5 on FPGA with Hailo-8 and 4 Pi Cameras
Great question! I work for a computer vision company (Roboflow) and have seen computer vision used for everything from accident prevention on critical infrastructure to identifying defects on vehicle parts to detecting trading cards for use in video game applications.
Drawing bounding boxes is a common end point for demos, but for businesses using computer vision there is an entire world after that: on-device deployment. This can mean devices like an NVIDIA Jetson (a very common choice), Raspberry Pis, or central CUDA GPU servers for processing large volumes of data (perhaps connected to cameras over RTSP).
Note: there are many models that are faster and perform better than YOLOv5 (e.g. YOLOv8, YOLOv10, PaliGemma). Roboflow Inference, which our ML team maintains, has various guides on deploying models to the edge: https://inference.roboflow.com/#inference-pipeline
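The "world after bounding boxes" described above can be sketched as a simple edge-deployment loop: frames come in, a model runs, and post-processing decides what survives before any business logic fires. This is a minimal, self-contained illustration with the model call stubbed out; all names here are hypothetical stand-ins, not the Roboflow Inference API.

```python
# Sketch of an on-device inference loop (all names hypothetical).
# A real deployment would swap run_model() for a TensorRT engine,
# an ONNX session, or a hosted inference call.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # (x1, y1, x2, y2) in pixels

def run_model(frame):
    # Stub standing in for a real model forward pass; returns raw detections.
    return [
        Detection("defect", 0.91, (10, 10, 50, 50)),
        Detection("defect", 0.32, (60, 60, 80, 80)),
    ]

def postprocess(detections, threshold=0.5):
    # Drop low-confidence detections before downstream logic acts on them.
    return [d for d in detections if d.confidence >= threshold]

def process_stream(frames):
    # In production, frames would arrive from a camera (e.g. over RTSP).
    return [postprocess(run_model(frame)) for frame in frames]
```

With the stub above, `process_stream([frame])` keeps only the 0.91-confidence detection; the 0.32 one is filtered out by the default threshold.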
Supervision: Reusable Computer Vision
Yeah, inference[1] is our open source package for running locally (either directly in Python or via a Docker container). It works with all the models on Universe, models you train yourself (assuming we support the architecture; we have a bunch of notebooks available[2]), or train in our platform, plus several more general foundation models[3] (for things like embeddings, zero-shot detection, question answering, OCR, etc).
We also have a hosted API[4] you can hit for most models we support (except some of the large vision models that are really GPU-heavy) if you prefer.
[1] https://github.com/roboflow/inference
[2] https://github.com/roboflow/notebooks
[3] https://inference.roboflow.com/foundation/about/
[4] https://docs.roboflow.com/deploy/hosted-api
- Serverless development experience for embedded computer vision
- FLaNK Stack Weekly 16 October 2023
- Show HN: Pip install inference, open source computer vision deployment
What are some alternatives?
- ComfyUI-IDM-VTON - ComfyUI adaptation of IDM-VTON for virtual try-on.
- llmware - Unified framework for building enterprise RAG pipelines with small, specialized models
- JsonGenius - Get structured JSON data from any page.
- fast-data-dev - Kafka Docker for development. Kafka, Zookeeper, Schema Registry, Kafka-Connect, Landoop Tools, 20+ connectors
- RealtimeTTS - Converts text to speech in realtime
- Wails - Create beautiful applications using Go
- karapace - Karapace - Your Apache Kafka® essentials in one tool
- CML_AMP_AI_Text_Summarization_with_Amazon_Bedrock - CML_AMP_AI_Text_Summarization_with_Amazon_Bedrock
- RealtimeSTT - A robust, efficient, low-latency speech-to-text library with advanced voice activity detection, wake word activation and instant transcription.
- yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
- Kouncil - Powerful dashboard for your Kafka. Monitor status, manage groups, topics, send messages and diagnose problems. All in one user friendly web dashboard.
- openstatus - 🏓 The open-source synthetic & real user monitoring platform 🏓