| | autodistill | Milvus |
|---|---|---|
| Mentions | 13 | 105 |
| Stars | 1,552 | 27,068 |
| Stars growth | 5.3% | 2.8% |
| Activity | 9.2 | 10.0 |
| Last commit | about 1 month ago | 6 days ago |
| Language | Python | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
autodistill
-
Ask HN: Who is hiring? (February 2024)
Roboflow | Open Source Software Engineer, Web Designer / Developer, and more. | Full-time (Remote, SF, NYC) | https://roboflow.com/careers?ref=whoishiring0224
Roboflow is the fastest way to use computer vision in production. We help developers give their software the sense of sight. Our end-to-end platform[1] provides tooling for image collection, annotation, dataset exploration and curation, training, and deployment.
Over 250k engineers (including engineers from 2/3 Fortune 100 companies) build with Roboflow. We now host the largest collection of open source computer vision datasets and pre-trained models[2]. We are pushing forward the CV ecosystem with open source projects like Autodistill[3] and Supervision[4]. And we've built one of the most comprehensive resources for software engineers to learn to use computer vision with our popular blog[5] and YouTube channel[6].
We have several openings available but are primarily looking for strong technical generalists who want to help us democratize computer vision and like to wear many hats and have an outsized impact. Our engineering culture is built on a foundation of autonomy & we don't consider an engineer fully ramped until they can "choose their own loss function". At Roboflow, engineers aren't just responsible for building things but also for helping us figure out what we should build next. We're builders & problem solvers; not just coders. (For this reason we also especially love hiring past and future founders.)
We're currently hiring full-stack engineers for our ML and web platform teams, a web developer to bridge our product and marketing teams, several technical roles on the sales & field engineering teams, and our first applied machine learning researcher to help push forward the state of the art in computer vision.
[1]: https://roboflow.com/?ref=whoishiring0224
[2]: https://roboflow.com/universe?ref=whoishiring0224
[3]: https://github.com/autodistill/autodistill
[4]: https://github.com/roboflow/supervision
[5]: https://blog.roboflow.com/?ref=whoishiring0224
[6]: https://www.youtube.com/@Roboflow
-
Is supervised learning dead for computer vision?
The environments in which a vision model is deployed differ from those of a language model.
A vision model may be deployed on cameras without an internet connection, with data retrieved later; it may run on camera streams in a factory; it may process sports broadcasts where low latency is essential. In many cases, real-time (or close to real-time) performance is needed.
Fine-tuned models can deliver the requisite performance for vision tasks with far less compute than an LLM equivalent; their weights are small by comparison.
LLMs are often deployed via API. This is practical for some vision applications (e.g. bulk processing), but for many use cases not being able to run on the edge is a dealbreaker.
Foundation models certainly have a place.
CLIP, for example, runs fast and may be used for a task like classification on videos. Where I see opportunity right now is in using foundation models to train fine-tuned models. The foundation model acts as an automatic labeling tool; you can then use the resulting dataset to train a smaller, fine-tuned model. (Disclosure: I co-maintain a Python package that lets you do this, Autodistill -- https://github.com/autodistill/autodistill).
SAM (segmentation), CLIP (embeddings, classification), Grounding DINO (zero-shot object detection) in particular have a myriad of use cases, one of which is automated labeling.
I'm looking forward to seeing foundation models improve for all the opportunities that will bring!
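The label-then-distill idea described above can be sketched in a few lines of stdlib Python. This is a toy illustration only: the "foundation model" is a hand-written rule and the "small model" is a nearest-centroid classifier; the real Autodistill API wraps models like Grounding DINO and YOLOv8, and all names here are made up for the sketch.

```python
# Toy sketch of foundation-model distillation: a "big" labeler
# annotates unlabeled data, then a small model is fit on its labels.

def foundation_label(point):
    """Stand-in for a slow foundation model: labels a 2-D point."""
    x, y = point
    return "cat" if x + y > 1.0 else "dog"

def train_small_model(points, labels):
    """Fit a tiny nearest-centroid classifier on the auto-labels."""
    sums, counts = {}, {}
    for (x, y), lab in zip(points, labels):
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Cheap inference: nearest centroid by squared distance."""
    x, y = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - x) ** 2
                             + (centroids[lab][1] - y) ** 2)

# Unlabeled data -> auto-labels -> small supervised model.
data = [(0.1, 0.2), (0.9, 0.9), (0.2, 0.1), (1.0, 0.8)]
labels = [foundation_label(p) for p in data]
small = train_small_model(data, labels)
print(predict(small, (0.95, 0.9)))  # classified by the small, fast model
```

The shape is the same as the real pipeline: the expensive model is only consulted once, at labeling time, and inference runs entirely on the cheap distilled model.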
- Ask HN: Who is hiring? (October 2023)
-
Autodistill: A new way to create CV models
Autodistill
- Show HN: Autodistill, automated image labeling with foundation vision models
-
Show HN: Pip install inference, open source computer vision deployment
Thanks for the suggestion! Definitely agree, we’ve seen that work extremely well for Supervision[1] and Autodistill[2], some of our other open source projects.
There’s still a lot of polish like this we need to do; we’ve spent most of our effort cleaning up the code and documentation to prep for open sourcing the repo.
Next step is improving the usability of the pip pathway (that interface was just added; the http server was all we had for internal use). Then we’re going to focus on improving the content and expanding the models it supports.
[1] https://github.com/roboflow/supervision
[2] https://github.com/autodistill/autodistill
-
Ask HN: Who is hiring? (August 2023)
Roboflow | Multiple Roles | Full-time (Remote, SF, NYC) | https://roboflow.com/careers?ref=whoishiring0823
Roboflow is the fastest way to use computer vision in production. We help developers give their software the sense of sight. Our end-to-end platform[1] provides tooling for image collection, annotation, dataset exploration and curation, training, and deployment.
Over 250k engineers (including engineers from 2/3 Fortune 100 companies) build with Roboflow. We now host the largest collection of open source computer vision datasets and pre-trained models[2]. We are pushing forward the CV ecosystem with open source projects like Autodistill[3] and Supervision[4]. And we've built one of the most comprehensive resources for software engineers to learn to use computer vision with our popular blog[5] and YouTube channel[6].
We have several openings available, but are primarily looking for strong technical generalists who want to help us democratize computer vision and like to wear many hats and have an outsized impact. Our engineering culture is built on a foundation of autonomy & we don't consider an engineer fully ramped until they can "choose their own loss function". At Roboflow, engineers aren't just responsible for building things but also for helping figure out what we should build next. We're builders & problem solvers; not just coders. (For this reason we also especially love hiring past and future founders.)
We're currently hiring full-stack engineers for our ML and web platform teams, a web developer to bridge our product and marketing teams, several technical roles on the sales & field engineering teams, and our first applied machine learning researcher to help push forward the state of the art in computer vision.
[1]: https://roboflow.com/?ref=whoishiring0823
[2]: https://roboflow.com/universe?ref=whoishiring0823
[3]: https://github.com/autodistill/autodistill
[4]: https://github.com/roboflow/supervision
[5]: https://blog.roboflow.com/?ref=whoishiring0823
[6]: https://www.youtube.com/@Roboflow
-
AI That Teaches Other AI
> Their SKILL tool involves a set of algorithms that make the process go much faster, they said, because the agents learn at the same time in parallel. Their research showed if 102 agents each learn one task and then share, the amount of time needed is reduced by a factor of 101.5 after accounting for the necessary communications and knowledge consolidation among agents.
This is a really interesting idea. It's like the reverse of knowledge distillation (which I've been thinking about a lot[1]) where you have one giant model that knows a lot about a lot & you use that model to train smaller, faster models that know a lot about a little.
Instead, if you could train a lot of models that know a lot about a little (which is a lot less computationally intensive because the problem space is so confined) and combine them into a generalized model, that'd be hugely beneficial.
Unfortunately, after a bit of digging into the paper & Github repo[2], this doesn't seem to be what's happening at all.
> The code will learn 102 small and separate heads (either a linear head or a linear head with a task bias) for each task respectively, in order. This step can be parallelized on multiple GPUs with one task per GPU. The heads will be saved in the weight folder. After that, the code will learn a task mapper (either using GMMC or Mahalanobis) to distinguish images task-wise. Then, all images will be evaluated at the same time without a task label.
So the knowledge isn't being combined (and the agents aren't learning from each other) into a generalized model. They're just training a bunch of independent models for specific tasks & adding a model-selection step that maps an image to the most relevant "expert". My guess is you could do the same thing using CLIP vectors as the routing method to supervised models trained on specific datasets (we found that datasets largely live in distinct regions of CLIP-space[3]).
[1] https://github.com/autodistill/autodistill
[2] https://github.com/gyhandy/Shared-Knowledge-Lifelong-Learnin...
[3] https://www.rf100.org
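The routing guess in the last paragraph can be sketched with stdlib Python. Everything here is mocked for illustration: the embeddings would in practice come from CLIP, and the expert names and centroid values are invented, not from the paper or repo.

```python
import math

# Sketch of embedding-based model routing: map an input embedding to
# the most relevant "expert" model by cosine similarity against
# per-dataset centroids (mock 3-D stand-ins for CLIP vectors).

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def route(embedding, centroids):
    """Return the name of the expert whose centroid is most similar."""
    return max(centroids, key=lambda name: cosine(embedding, centroids[name]))

# Mock per-dataset centroids in a tiny "CLIP space".
centroids = {
    "aerial_expert":  [0.9, 0.1, 0.0],
    "medical_expert": [0.0, 0.9, 0.1],
    "retail_expert":  [0.1, 0.0, 0.9],
}
print(route([0.8, 0.2, 0.1], centroids))  # -> aerial_expert
```

If datasets really do occupy distinct regions of embedding space, this single `route` step is all the "task mapper" needs to be.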
- Autodistill: Use foundation vision models to train smaller, supervised models
- Autodistill: use big slow foundation models to train small fast supervised models (r/MachineLearning)
Milvus
-
Computer Vision Meetup: Develop a Legal Search Application from Scratch using Milvus and DSPy!
Legal practitioners often need to find specific cases and clauses across thousands of dense documents. While traditional keyword-based search techniques are useful, they fail to fully capture semantic content of queries and case files. Vector search engines and large language models provide an intriguing alternative. In this talk, I will show you how to build a legal search application using the DSPy framework and the Milvus vector search engine.
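The core of such a search application is a top-k similarity search over document embeddings. Below is a stdlib-only sketch of that core under stated assumptions: the embedding is a toy bag-of-words count over a fixed vocabulary, and the brute-force scan stands in for what a learned embedding model plus a vector database like Milvus would do at scale.

```python
import math

# Minimal sketch of semantic search: embed documents, then answer a
# query with a top-k cosine-similarity search. The embedding here is a
# toy term-count vector; a real system would use a learned model and a
# vector database instead of this brute-force scan.

VOCAB = ["contract", "breach", "patent", "damages", "license"]

def embed(text):
    """Toy embedding: term counts over a fixed vocabulary."""
    words = text.lower().split()
    return [words.count(term) for term in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def top_k(query, docs, k=2):
    """Brute-force nearest neighbours -- the step a vector DB accelerates."""
    q = embed(query)
    scored = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return scored[:k]

docs = [
    "breach of contract and damages awarded",
    "patent license dispute",
    "zoning variance application",
]
print(top_k("contract breach damages", docs, k=1))
```

Unlike keyword search, the real version of `embed` puts semantically related phrasings near each other, so a query can match a clause that shares no literal keywords with it.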
-
Ask HN: Who is hiring? (April 2024)
Zilliz (zilliz.com) | Hybrid/ONSITE (SF, NYC) | Full-time
I am part of the hiring team for DevRel
NYC - https://boards.greenhouse.io/zilliz/jobs/4307910005
SF - https://boards.greenhouse.io/zilliz/jobs/4317590005
Zilliz is the company behind Milvus (https://github.com/milvus-io/milvus), the most starred vector database on GitHub. Milvus is a distributed vector database that shines in 1B+ vector use cases. Examples include autonomous driving, e-commerce, and drug discovery. (and, of course, RAG)
We are also hiring for other roles whose hiring process I am not personally involved in, such as product managers, software engineers, and recruiters.
-
Unlock Advanced Search Capabilities with Milvus and Read about RAG
Get started with Milvus on GitHub.
-
Milvus vs pgvecto.rs - a user suggested alternative
2 projects | 13 Mar 2024
-
How to choose the right type of database
Milvus: An open-source vector database designed for AI and ML applications. It excels in handling large-scale vector similarity searches, making it suitable for recommendation systems, image and video retrieval, and natural language processing tasks.
-
Simplifying the Milvus Selection Process
Selecting the right version of open-source Milvus matters for any project built on vector search. Milvus offers several versions of its vector database tailored to different requirements, so understanding which one fits your deployment is key to achieving the desired outcome.
-
7 Vector Databases Every Developer Should Know!
Milvus is an open-source vector database designed to handle large-scale similarity search and vector indexing. It supports multiple index types and offers highly efficient search capabilities, making it suitable for a wide range of AI and ML applications, including image and video recognition, natural language processing, and recommendation systems.
-
Ask HN: Who is hiring? (February 2024)
Zilliz is hiring! We're looking for REMOTE and/or HYBRID roles in SF
Zilliz is the company behind Milvus (https://github.com/milvus-io/milvus), the most widely adopted vector database. Vector databases are a crucial piece of any technology stack looking to take advantage of unstructured data. The most recent and notable example is Retrieval Augmented Generation (RAG), where vector databases like Milvus are used as the tool to inject customized data. In other words, vector databases make things like customized chatbots, personalized product recommendations, and more possible.
We are hiring for Developer Advocates, Senior+ Level Engineers and Product people, and Talent Acquisition. Check out all the roles here: https://zilliz.com/careers
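The RAG flow mentioned above can be sketched in stdlib Python. The retriever here is a word-overlap stub standing in for a Milvus vector search, and the prompt template and store contents are invented for illustration.

```python
# Sketch of the RAG step: retrieved passages are injected into the
# prompt before it reaches the language model. Retrieval is stubbed
# with word overlap; a real stack would use a vector search instead.

def retrieve(query, store, k=2):
    """Stub retriever: rank passages by shared words with the query."""
    qwords = set(query.lower().split())
    ranked = sorted(store,
                    key=lambda p: len(qwords & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, passages):
    """Inject the retrieved context ahead of the user's question."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

store = [
    "Our return window is 30 days from delivery.",
    "Support is available by email 24/7.",
    "Gift cards cannot be refunded.",
]
query = "What is the return window?"
prompt = build_prompt(query, retrieve(query, store, k=1))
print(prompt)
```

The language model never needs to be retrained: swapping the contents of `store` is what makes the chatbot "customized".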
-
Qdrant, the Vector Search Database, raised $28M in a Series A round
Good on them, I know the crustaceans are out here happy about this raise for a Rust based Vector DB!
(now I'm gonna plug what I work on)
If you're interested in a more scalable vector database written in Go, check out Milvus (https://github.com/milvus-io/milvus)
-
Open Source Advent Fun Wraps Up!
But before we do, I do want to say that 🤩 all these lovely Open-Source projects would love a little 🎉💕 love by getting a GitHub star ⭐ for their efforts. Including Open Source Milvus 🥰
What are some alternatives?
anylabeling - Effortless AI-assisted data labeling with AI support from YOLO, Segment Anything, MobileSAM!!
pgvector - Open-source vector similarity search for Postgres
tabby - Self-hosted AI coding assistant
faiss - A library for efficient similarity search and clustering of dense vectors.
Shared-Knowledge-Lifelong-Learnin
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
segment-geospatial - A Python package for segmenting geospatial data with the Segment Anything Model (SAM)
Weaviate - Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
opentofu - OpenTofu lets you declaratively manage your cloud infrastructure.
Elasticsearch - Free and Open, Distributed, RESTful Search Engine
supervision - We write your reusable computer vision tools. 💜
Face Recognition - The world's simplest facial recognition api for Python and the command line