gitautomator vs autodistill

| | gitautomator | autodistill |
|---|---|---|
| Mentions | 1 | 17 |
| Stars | 6 | 2,060 |
| Growth | - | 2.7% |
| Activity | 2.8 | 8.1 |
| Last commit | 10 months ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
- Ask HN: Who is hiring? (December 2024)
- Ask HN: Who is hiring? (November 2024)
- Ask HN: Who is hiring? (October 2024)
- Sam 2: Segment Anything in Images and Videos
- Ask HN: Who is hiring? (February 2024)
- Is supervised learning dead for computer vision?
The places in which a vision model is deployed are different from those of a language model.
A vision model may be deployed on cameras without an internet connection, with data retrieved later; it may run on camera streams in a factory; or it may process sports broadcasts where low latency is essential. In many cases, real-time -- or close to real-time -- performance is needed.
Fine-tuned models can deliver the requisite performance for vision tasks with relatively low computational power compared to the LLM equivalent. The weights are small relative to LLM weights.
LLMs are often deployed via API. This is practical for some vision applications (e.g. bulk processing), but for many use cases not being able to run on the edge is a dealbreaker.
Foundation models certainly have a place.
CLIP, for example, is fast, and may be used for a task like classification on videos. Where I see opportunity right now is in using foundation models to train fine-tuned models. The foundation model acts as an automatic labeling tool; you can then use the labeled data as your training dataset. (Disclosure: I co-maintain a Python package that lets you do this, Autodistill -- https://github.com/autodistill/autodistill).
SAM (segmentation), CLIP (embeddings, classification), Grounding DINO (zero-shot object detection) in particular have a myriad of use cases, one of which is automated labeling.
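The pattern described above can be sketched in a few lines. This is a minimal, self-contained illustration of the idea -- a large zero-shot model auto-labels raw images, and the resulting dataset trains a small task-specific model. The names `foundation_label` and `SmallDetector` are hypothetical stand-ins, not the actual Autodistill API; see the repo linked above for the real interface.

```python
def foundation_label(image, classes):
    """Stand-in for a zero-shot labeler (e.g. Grounding DINO).
    Returns (class_index, bounding_box) pairs for one image."""
    # A real implementation would run the foundation model here.
    return [(0, (10, 10, 50, 50))]

def build_dataset(images, classes):
    """Auto-label every raw image to form a training set."""
    return [(img, foundation_label(img, classes)) for img in images]

class SmallDetector:
    """Stand-in for a compact, edge-deployable student model
    (e.g. a YOLO variant) trained on the auto-labeled data."""
    def train(self, dataset):
        self.num_examples = len(dataset)

raw_images = ["frame_001.jpg", "frame_002.jpg"]
dataset = build_dataset(raw_images, classes=["person"])
student = SmallDetector()
student.train(dataset)  # the small model is what ships to the edge
```

The key point is that no human labeling happens in the loop: the foundation model's zero-shot predictions become the supervision signal for the smaller model.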
I'm looking forward to seeing foundation models improve for all the opportunities that will bring!
- Ask HN: Who is hiring? (October 2023)
- Autodistill: A new way to create CV models
- Show HN: Autodistill, automated image labeling with foundation vision models
- Show HN: Pip install inference, open source computer vision deployment
Thanks for the suggestion! Definitely agree; we’ve seen that work extremely well for Supervision[1] and Autodistill[2], two of our other open source projects.
There’s still a lot of polish like this we need to do; we’ve spent most of our effort cleaning up the code and documentation to prep for open sourcing the repo.
Next step is improving the usability of the pip pathway (that interface was just added; the http server was all we had for internal use). Then we’re going to focus on improving the content and expanding the models it supports.
[1] https://github.com/roboflow/supervision
[2] https://github.com/autodistill/autodistill