-
inference
Turn any computer or edge device into a command center for your computer vision projects. (by roboflow)
-
Thanks for the suggestion! Definitely agree, we’ve seen that work extremely well for Supervision[1] and Autodistill[2], two of our other open source projects.
There’s still a lot of polish like this we need to do; we’ve spent most of our effort cleaning up the code and documentation to prep for open sourcing the repo.
The next step is improving the usability of the pip pathway (that interface was just added; the HTTP server was all we had for internal use). Then we’re going to focus on improving the content and expanding the models it supports.
[1] https://github.com/roboflow/supervision
[2] https://github.com/autodistill/autodistill
-
autodistill
Images to inference with no labeling (use foundation models to train supervised models).
-
blackjack-basic-strategy
A computer vision-powered Blackjack basic strategy app built with Roboflow.
It’s an easy-to-use inference server for computer vision models.
The end result is a Docker container that serves a standardized API as a microservice; your application calls it to get predictions from computer vision models (though there is also a native Python interface).
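For a rough sense of the HTTP pathway, here's a minimal client sketch (illustrative only: the port, route, and request format below are assumptions, not necessarily the server's actual interface):

    # Sketch of posting an image to a local inference server over HTTP.
    # Assumptions: server on localhost:9001, a /<model_id>/<version> route,
    # an api_key query parameter, and a base64-encoded image body.
    import base64
    import requests

    with open("card.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    resp = requests.post(
        "http://localhost:9001/playing-cards/1",   # hypothetical model id/version
        params={"api_key": "YOUR_API_KEY"},
        data=image_b64,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    print(resp.json())  # predictions in a standardized format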
It’s backed by a bunch of component pieces:
* a server (so you don’t have to reimplement things like image processing & prediction visualization on every project)
* standardized APIs for computer vision tasks (so switching out the model weights and architecture can be done independently of your application code; see the sketch after this list)
* model architecture implementations (which implement the tensor parsing glue between images & predictions) for supervised models that you've fine-tuned to perform custom tasks
* foundation model implementations (like CLIP & SAM) that tend to chain well with fine-tuned models
* reusable utils to make adding support for new models easier
* a model registry (so your code can be independent from your model weights & you don't have to re-build and re-deploy every time you want to iterate on your model weights)
* data management integrations (so the more your model sees in the wild, the more images of edge cases you can collect to improve your dataset & model)
* an ecosystem (there are tens of thousands of fine-tuned models shared by users that you can use off the shelf via Roboflow Universe[1])
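To illustrate the standardized-API point above (a sketch; the field names are assumptions modeled on Roboflow's hosted API responses, not a guaranteed schema), application code can be written against a generic prediction shape and never care which architecture produced it:

    # Application logic written against a standardized prediction shape,
    # so swapping model weights or architectures doesn't touch this code.
    # Field names (x, y, width, height, class, confidence) are assumptions.
    def count_confident(predictions, threshold=0.8):
        return sum(1 for p in predictions if p["confidence"] >= threshold)

    example = [
        {"x": 320, "y": 240, "width": 64, "height": 96,
         "class": "ace_of_spades", "confidence": 0.93},
        {"x": 480, "y": 260, "width": 60, "height": 90,
         "class": "king_of_hearts", "confidence": 0.71},
    ]
    print(count_confident(example))  # 1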
Additionally, since it's focused specifically on computer vision, it has CV-specific features (like direct camera stream input) and makes different tradeoffs than more general ML solutions: namely, it's optimized for small, fast models that run at the edge and need to work on many different devices, from NVIDIA Jetsons and Raspberry Pis to beefy cloud servers.
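As a rough sketch of the camera-stream use case (again illustrative; this just loops webcam frames through the same hypothetical HTTP endpoint, whereas the server's built-in stream input may work differently):

    # Feed webcam frames to a local inference server, one POST per frame.
    # Endpoint and parameters are the same assumptions as the earlier sketch.
    import base64
    import cv2
    import requests

    cap = cv2.VideoCapture(0)  # default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        ok, buf = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        image_b64 = base64.b64encode(buf.tobytes()).decode("utf-8")
        resp = requests.post(
            "http://localhost:9001/playing-cards/1",
            params={"api_key": "YOUR_API_KEY"},
            data=image_b64,
            headers={"Content-Type": "application/x-www-form-urlencoded"},
        )
        # ...use resp.json()["predictions"] for this frame...
    cap.release()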
[1] https://universe.roboflow.com