autodistill VS blackjack-basic-strategy

Compare autodistill vs blackjack-basic-strategy and see their differences.

                    autodistill              blackjack-basic-strategy
Mentions            13                       23
Stars               1,552                    26
Stars growth        5.3%                     -
Activity            9.2                      2.0
Latest commit       about 1 month ago        about 1 year ago
Language            Python                   JavaScript
License             Apache License 2.0       MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

autodistill

Posts with mentions or reviews of autodistill. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-01.
  • Ask HN: Who is hiring? (February 2024)
    18 projects | news.ycombinator.com | 1 Feb 2024
    Roboflow | Open Source Software Engineer, Web Designer / Developer, and more. | Full-time (Remote, SF, NYC) | https://roboflow.com/careers?ref=whoishiring0224

    Roboflow is the fastest way to use computer vision in production. We help developers give their software the sense of sight. Our end-to-end platform[1] provides tooling for image collection, annotation, dataset exploration and curation, training, and deployment.

    Over 250k engineers (including engineers from 2/3 Fortune 100 companies) build with Roboflow. We now host the largest collection of open source computer vision datasets and pre-trained models[2]. We are pushing forward the CV ecosystem with open source projects like Autodistill[3] and Supervision[4]. And we've built one of the most comprehensive resources for software engineers to learn to use computer vision with our popular blog[5] and YouTube channel[6].

    We have several openings available but are primarily looking for strong technical generalists who want to help us democratize computer vision and like to wear many hats and have an outsized impact. Our engineering culture is built on a foundation of autonomy & we don't consider an engineer fully ramped until they can "choose their own loss function". At Roboflow, engineers aren't just responsible for building things but also for helping us figure out what we should build next. We're builders & problem solvers; not just coders. (For this reason we also especially love hiring past and future founders.)

    We're currently hiring full-stack engineers for our ML and web platform teams, a web developer to bridge our product and marketing teams, several technical roles on the sales & field engineering teams, and our first applied machine learning researcher to help push forward the state of the art in computer vision.

    [1]: https://roboflow.com/?ref=whoishiring0224

    [2]: https://roboflow.com/universe?ref=whoishiring0224

    [3]: https://github.com/autodistill/autodistill

    [4]: https://github.com/roboflow/supervision

    [5]: https://blog.roboflow.com/?ref=whoishiring0224

    [6]: https://www.youtube.com/@Roboflow

  • Is supervised learning dead for computer vision?
    9 projects | news.ycombinator.com | 28 Oct 2023
    The places in which a vision model is deployed are different from those of a language model.

    A vision model may be deployed on cameras without an internet connection, with data retrieved later; it may run on camera streams in a factory, or on sports broadcasts where low latency matters. In many cases, real-time -- or close to real-time -- performance is needed.

    Fine-tuned models can deliver the requisite performance for vision tasks with relatively low computational power compared to the LLM equivalent. The weights are small relative to LLM weights.

    LLMs are often deployed via API. This is practical for some vision applications (e.g., bulk processing), but for many use cases not being able to run on the edge is a dealbreaker.

    Foundation models certainly have a place.

    CLIP, for example, is fast and can be used for tasks like video classification. Where I see opportunity right now is in using foundation models to train fine-tuned models: the foundation model acts as an automatic labeling tool, and the dataset it produces trains a smaller model. (Disclosure: I co-maintain a Python package that lets you do this, Autodistill -- https://github.com/autodistill/autodistill).

    SAM (segmentation), CLIP (embeddings, classification), Grounding DINO (zero-shot object detection) in particular have a myriad of use cases, one of which is automated labeling.
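The label-then-train loop described above can be sketched end to end with a toy stand-in for the foundation model. Everything here is made up for illustration: `teacher` is a hypothetical proxy for a zero-shot model like Grounding DINO, and the nearest-centroid `student` stands in for the small supervised model; the real workflow goes through Autodistill's own labeling and training API.

```python
# Toy sketch of "use a big model to label data for a small model".
# teacher() is a made-up proxy for a foundation model; the nearest-centroid
# student is a made-up proxy for a small, fast supervised model.
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Pretend foundation model: labels a point by which cluster it sits in.
    return int(x.sum() > 0)

# 1. Collect unlabeled data and auto-label it with the teacher.
unlabeled = rng.normal(size=(200, 2)) + rng.choice([-2, 2], size=(200, 1))
labels = np.array([teacher(x) for x in unlabeled])

# 2. Fit a small, cheap student (nearest centroid) on the teacher's labels.
centroids = np.stack([unlabeled[labels == c].mean(axis=0) for c in (0, 1)])

def student(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# 3. The student now approximates the teacher at a fraction of the cost.
test_points = rng.normal(size=(50, 2)) + rng.choice([-2, 2], size=(50, 1))
agreement = np.mean([student(x) == teacher(x) for x in test_points])
print(f"student/teacher agreement: {agreement:.2f}")
```

The point of the exercise is the division of labor, not the toy classifier: the expensive model is only consulted at dataset-creation time, and only the cheap student ships to the edge.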

    I'm looking forward to seeing foundation models improve for all the opportunities that will bring!

  • Ask HN: Who is hiring? (October 2023)
    9 projects | news.ycombinator.com | 2 Oct 2023
  • Autodistill: A new way to create CV models
    6 projects | /r/developersIndia | 30 Sep 2023
    Autodistill
  • Show HN: Autodistill, automated image labeling with foundation vision models
    1 project | news.ycombinator.com | 6 Sep 2023
  • Show HN: Pip install inference, open source computer vision deployment
    4 projects | news.ycombinator.com | 23 Aug 2023
    Thanks for the suggestion! Definitely agree, we’ve seen that work extremely well for Supervision[1] and Autodistill, some of our other open source projects.

    There’s still a lot of polish like this we need to do; we’ve spent most of our effort cleaning up the code and documentation to prep for open sourcing the repo.

    Next step is improving the usability of the pip pathway (that interface was just added; the http server was all we had for internal use). Then we’re going to focus on improving the content and expanding the models it supports.

    [1] https://github.com/roboflow/supervision

    [2] https://github.com/autodistill/autodistill

  • Ask HN: Who is hiring? (August 2023)
    13 projects | news.ycombinator.com | 1 Aug 2023
    Roboflow | Multiple Roles | Full-time (Remote, SF, NYC) | https://roboflow.com/careers?ref=whoishiring0823

    Roboflow is the fastest way to use computer vision in production. We help developers give their software the sense of sight. Our end-to-end platform[1] provides tooling for image collection, annotation, dataset exploration and curation, training, and deployment.

    Over 250k engineers (including engineers from 2/3 Fortune 100 companies) build with Roboflow. We now host the largest collection of open source computer vision datasets and pre-trained models[2]. We are pushing forward the CV ecosystem with open source projects like Autodistill[3] and Supervision[4]. And we've built one of the most comprehensive resources for software engineers to learn to use computer vision with our popular blog[5] and YouTube channel[6].

    We have several openings available, but are primarily looking for strong technical generalists who want to help us democratize computer vision and like to wear many hats and have an outsized impact. Our engineering culture is built on a foundation of autonomy & we don't consider an engineer fully ramped until they can "choose their own loss function". At Roboflow, engineers aren't just responsible for building things but also for helping figure out what we should build next. We're builders & problem solvers; not just coders. (For this reason we also especially love hiring past and future founders.)

    We're currently hiring full-stack engineers for our ML and web platform teams, a web developer to bridge our product and marketing teams, several technical roles on the sales & field engineering teams, and our first applied machine learning researcher to help push forward the state of the art in computer vision.

    [1]: https://roboflow.com/?ref=whoishiring0823

    [2]: https://roboflow.com/universe?ref=whoishiring0823

    [3]: https://github.com/autodistill/autodistill

    [4]: https://github.com/roboflow/supervision

    [5]: https://blog.roboflow.com/?ref=whoishiring0823

    [6]: https://www.youtube.com/@Roboflow

  • AI That Teaches Other AI
    4 projects | news.ycombinator.com | 20 Jul 2023
    > Their SKILL tool involves a set of algorithms that make the process go much faster, they said, because the agents learn at the same time in parallel. Their research showed if 102 agents each learn one task and then share, the amount of time needed is reduced by a factor of 101.5 after accounting for the necessary communications and knowledge consolidation among agents.

    This is a really interesting idea. It's like the reverse of knowledge distillation (which I've been thinking about a lot[1]) where you have one giant model that knows a lot about a lot & you use that model to train smaller, faster models that know a lot about a little.

    Instead, if you could train a lot of models that each know a lot about a little (which is far less computationally intensive because the problem space is so confined) and combine them into a generalized model, that would be hugely beneficial.

    Unfortunately, after a bit of digging into the paper & Github repo[2], this doesn't seem to be what's happening at all.

    > The code will learn 102 small and separate heads (either a linear head or a linear head with a task bias) for each task respectively, in order. This step can be parallelized on multiple GPUs with one task per GPU. The heads will be saved in the weight folder. After that, the code will learn a task mapper (either using GMMC or Mahalanobis) to distinguish images task-wise. Then, all images will be evaluated at the same time without a task label.

    So the knowledge isn't being combined (and the agents aren't learning from each other) into a generalized model. They're just training a bunch of independent models for specific tasks & adding a model-selection step that maps an image to the most relevant "expert". My guess is you could do the same thing using CLIP vectors as the routing method to supervised models trained on specific datasets (we found that datasets largely live in distinct regions of CLIP-space[3]).

    [1] https://github.com/autodistill/autodistill

    [2] https://github.com/gyhandy/Shared-Knowledge-Lifelong-Learnin...

    [3] https://www.rf100.org
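The routing guess in that last paragraph, picking the expert whose dataset centroid is closest in CLIP-space, can be sketched with plain NumPy. The vectors below are random stand-ins for real CLIP embeddings, and `route` is a hypothetical helper, not code from any of the linked repos.

```python
# Sketch: route an image to the most relevant "expert" model by comparing
# its embedding against per-dataset centroids. Random vectors stand in for
# real CLIP embeddings of images and datasets.
import numpy as np

rng = np.random.default_rng(42)
num_experts, dim = 5, 512

# One centroid per task/dataset, e.g. the mean CLIP vector of its images.
centroids = rng.normal(size=(num_experts, dim))
centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

def route(embedding):
    """Index of the expert whose dataset centroid is most cosine-similar
    to the query embedding."""
    e = embedding / np.linalg.norm(embedding)
    return int(np.argmax(centroids @ e))

# A query near centroid 3 should be routed to expert 3.
query = centroids[3] + 0.1 * rng.normal(size=dim)
print("routed to expert:", route(query))
```

This relies on the observation cited above that datasets occupy distinct regions of CLIP-space; if two experts' datasets overlapped heavily, a single centroid per dataset would no longer separate them cleanly.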

  • Autodistill: Use foundation vision models to train smaller, supervised models
    1 project | news.ycombinator.com | 22 Jun 2023
  • Autodistill: use big slow foundation models to train small fast supervised models (r/MachineLearning)
    1 project | /r/datascienceproject | 10 Jun 2023

blackjack-basic-strategy

Posts with mentions or reviews of blackjack-basic-strategy. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-23.
  • Show HN: Pip install inference, open source computer vision deployment
    4 projects | news.ycombinator.com | 23 Aug 2023
    It’s an easy-to-use inference server for computer vision models.

    The end result is a Docker container that serves a standardized API as a microservice that your application uses to get predictions from computer vision models (though there is also a native Python interface).

    It’s backed by a bunch of component pieces:

    * a server (so you don’t have to reimplement things like image processing & prediction visualization on every project)

    * standardized APIs for computer vision tasks (so switching out the model weights and architecture can be done independently of your application code)

    * model architecture implementations (which implement the tensor parsing glue between images & predictions) for supervised models that you've fine-tuned to perform custom tasks

    * foundation model implementations (like CLIP & SAM) that tend to chain well with fine-tuned models

    * reusable utils to make adding support for new models easier

    * a model registry (so your code can be independent from your model weights & you don't have to re-build and re-deploy every time you want to iterate on your model weights)

    * data management integrations (so you can collect more images of edge cases to improve your dataset and model as it encounters more in the wild)

    * ecosystem (there are tens of thousands of fine-tuned models shared by users that you can use off the shelf via Roboflow Universe[1])

    Additionally, since it's focused specifically on computer vision, it has specific CV-focused features (like direct camera stream input) and makes some different tradeoffs than other more general ML solutions (namely, optimized for small-fast models that run at the edge & need support for running on many different devices like NVIDIA Jetsons and Raspberry Pis in addition to beefy cloud servers).

    [1] https://universe.roboflow.com
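As a sketch of what calling such a microservice might look like from application code: the endpoint path and payload fields below are hypothetical, not the package's documented schema, so treat this as the shape of the interaction rather than a working client.

```python
# Hypothetical client for a microservice-style inference server like the one
# described above. The model_id string, payload fields, and endpoint path are
# illustrative stand-ins; consult the project's docs for the real schema.
import base64
import json

def build_infer_request(image_bytes: bytes, model_id: str) -> dict:
    """Package an image for an HTTP inference call."""
    return {
        "model_id": model_id,  # which weights the model registry should serve
        "image": {
            "type": "base64",
            "value": base64.b64encode(image_bytes).decode("ascii"),
        },
    }

payload = build_infer_request(b"\x89PNG...", "my-detector/3")
body = json.dumps(payload)
# An application would then POST this to the container, e.g.:
#   requests.post("http://localhost:9001/infer", data=body,
#                 headers={"Content-Type": "application/json"})
print(sorted(payload))
```

Keeping the model choice in a `model_id` field rather than in application code is what lets you swap weights (the model-registry bullet above) without redeploying the caller.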

  • Open discussion and useful links people trying to do Object Detection
    4 projects | /r/deeplearning | 18 Feb 2023
    * Most of the time I find Roboflow extremely handy. I used it to merge datasets, augment, read tutorials, and that kind of thing. Basically you just create your dataset with Roboflow and focus on other aspects.
  • TensorFlow Datasets (TFDS): a collection of ready-to-use datasets
    3 projects | news.ycombinator.com | 21 Dec 2022
    For computer vision, there are 100k+ open source classification, object detection, and segmentation datasets available on Roboflow Universe: https://universe.roboflow.com
  • Please suggest resources to learn how to work with pre-trained CV models
    2 projects | /r/computervision | 21 Nov 2022
    Solid website and app overall for learning more about computer vision, discovering datasets, and keeping up with advancements in the field: * https://roboflow.com/learn * https://universe.roboflow.com (datasets) | https://blog.roboflow.com/computer-vision-datasets-and-apis/ * https://blog.roboflow.com
  • Suggestion for identification problem with shipping labels?
    3 projects | /r/computervision | 1 Nov 2022
    If you're lacking training images, you can also use [Roboflow Universe](https://universe.roboflow.com) to obtain them (over 100 million labeled images are available).
  • Ask HN: Who is hiring? (November 2022)
    20 projects | news.ycombinator.com | 1 Nov 2022
    Roboflow | Multiple Roles | Full-time (Remote) | https://roboflow.com/careers

    Roboflow is the fastest way to use computer vision in production. We help developers give their software the sense of sight. Our end-to-end platform[1] provides tooling for image collection, annotation, dataset exploration and curation, training, and deployment.

    Over 100k engineers (including engineers from 2/3 Fortune 100 companies) build with Roboflow. And we now host the largest collection[2] of open source computer vision datasets and pre-trained models[3].

    We have several openings available, but are primarily looking for strong technical generalists who want to help us democratize computer vision and like to wear many hats and have an outsized impact. (We especially love hiring past and future founders.)

    We're hiring 3 full-stack engineers this quarter and we're also looking for an infrastructure engineer with Elasticsearch experience.

    [1]: https://docs.roboflow.com

    [2]: https://blog.roboflow.com/computer-vision-datasets-and-apis/

    [3]: https://universe.roboflow.com

  • When annotating an image, if a collection of an entity changes the nature of the entity, do you label them collectively or separately?
    1 project | /r/computervision | 11 Oct 2022
    Based on what I do/use when I prepare models: a good framework for creating and improving this dataset faster is to use Roboflow Universe and search “flowers” and “bouquets of flowers” in the search bar (it’s like Google Images for CV datasets). You can search images by subject or metadata, and clone them directly into a free public workspace (they house up to 10k images without charge).
    * https://universe.roboflow.com/
    * https://universe.roboflow.com/search?q=flowers
    * https://universe.roboflow.com/search?q=bouqets
  • Need help on finding an area where machine learning is applicable on day-to-day life but not implemented already
    1 project | /r/computervision | 25 Sep 2022
    Lots of ideas will come to mind if you look and search through open source datasets: https://universe.roboflow.com/
  • Ask HN: Any good self-hosted image recognition software?
    6 projects | news.ycombinator.com | 22 Sep 2022
  • SAAS for object detection?
    3 projects | /r/computervision | 21 Sep 2022
    Open source datasets: https://universe.roboflow.com/ Model training: https://docs.roboflow.com/train Model deployment: https://docs.roboflow.com/inference/hosted-api

What are some alternatives?

When comparing autodistill and blackjack-basic-strategy you can also consider the following projects:

anylabeling - Effortless AI-assisted data labeling with AI support from YOLO, Segment Anything, MobileSAM!!

uxp-photoshop-plugin-samples - UXP Plugin samples for Photoshop 22 and higher.

tabby - Self-hosted AI coding assistant

wallet - The official repository for the Valora mobile cryptocurrency wallet.

Shared-Knowledge-Lifelong-Learnin

process-google-dataset - Process Google Dataset is a tool to download and process images for neural networks from a Google Image Search using a Chrome extension and a simple Python code.

segment-geospatial - A Python package for segmenting geospatial data with the Segment Anything Model (SAM)

rollup-react-example - An example React application using Rollup with ES modules, dynamic imports, Service Workers, and Flow.

opentofu - OpenTofu lets you declaratively manage your cloud infrastructure.

edenai-javascript - The best AI engines in one API: vision, text, speech, translation, OCR, machine learning, etc. SDK and examples for JavaScript developers.

supervision - We write your reusable computer vision tools. 💜

Speed-Coding-Games-in-JavaScript - Games Repository from Speed Coding channel