-
You can always slice the image into smaller tiles, run detection on each tile, and combine the results. Supervision has a utility for this (https://supervision.roboflow.com/latest/detection/tools/infe...), though it only works with detections. You can get much more accurate results this way. Here is a side-by-side comparison: https://github.com/roboflow/supervision/releases/tag/0.14.0.
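A minimal sketch of that tiled-inference approach with supervision's InferenceSlicer (the YOLOv8 model and the from_ultralytics connector are placeholder choices here; exact names can vary between supervision versions):

    import cv2
    import numpy as np
    import supervision as sv
    from ultralytics import YOLO  # placeholder detector; any model you can convert to sv.Detections works

    model = YOLO("yolov8n.pt")

    def callback(image_slice: np.ndarray) -> sv.Detections:
        # Run the detector on a single tile and convert its output
        # to supervision's Detections format.
        result = model(image_slice)[0]
        return sv.Detections.from_ultralytics(result)

    # InferenceSlicer cuts the image into tiles, runs the callback on each
    # tile, and merges the per-tile detections back into full-image
    # coordinates, suppressing duplicates in the overlap regions.
    slicer = sv.InferenceSlicer(callback=callback)

    image = cv2.imread("large_image.jpg")
    detections = slicer(image)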
-
inference
Turn any computer or edge device into a command center for your computer vision projects. (by roboflow)
Yeah, inference[1] is our open source package for running locally (either directly in Python or via a Docker container). It works with all the models on Universe, models you train yourself (assuming we support the architecture; we have a bunch of notebooks available[2]), or train in our platform, plus several more general foundation models[3] (for things like embeddings, zero-shot detection, question answering, OCR, etc).
We also have a hosted API[4] you can hit for most models we support (except some of the large vision models that are really GPU-heavy) if you prefer. A minimal local-usage sketch follows the links below.
[1] https://github.com/roboflow/inference
[2] https://github.com/roboflow/notebooks
[3] https://inference.roboflow.com/foundation/about/
[4] https://docs.roboflow.com/deploy/hosted-api
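For reference, a minimal local-usage sketch with the inference package (the model ID alias and the result field names are from memory and may differ by version):

    import cv2
    from inference import get_model

    # Load a model by its Roboflow model ID; "yolov8n-640" is a public
    # COCO-trained alias used here as a placeholder.
    model = get_model(model_id="yolov8n-640")

    image = cv2.imread("image.jpg")
    results = model.infer(image)

    # Each response carries the predicted boxes, classes, and confidences.
    for prediction in results[0].predictions:
        print(prediction.class_name, prediction.confidence)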
-
notebooks
This repository offers a comprehensive collection of tutorials on state-of-the-art computer vision models and techniques. Explore everything from foundational architectures like ResNet to cutting-edge models like YOLO11, RT-DETR, SAM 2, Florence-2, PaliGemma 2, and Qwen2.5VL.