autodistill VS LoRA

Compare autodistill vs LoRA and see how they differ.

                autodistill          LoRA
Mentions        13                   34
GitHub stars    1,552                9,172
Star growth     5.3%                 4.7%
Activity        9.2                  4.7
Latest commit   about 1 month ago    13 days ago
Language        Python               Python
License         Apache License 2.0   MIT License
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

autodistill

Posts with mentions or reviews of autodistill. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-01.
  • Ask HN: Who is hiring? (February 2024)
    18 projects | news.ycombinator.com | 1 Feb 2024
    Roboflow | Open Source Software Engineer, Web Designer / Developer, and more. | Full-time (Remote, SF, NYC) | https://roboflow.com/careers?ref=whoishiring0224

    Roboflow is the fastest way to use computer vision in production. We help developers give their software the sense of sight. Our end-to-end platform[1] provides tooling for image collection, annotation, dataset exploration and curation, training, and deployment.

    Over 250k engineers (including engineers from 2/3 Fortune 100 companies) build with Roboflow. We now host the largest collection of open source computer vision datasets and pre-trained models[2]. We are pushing forward the CV ecosystem with open source projects like Autodistill[3] and Supervision[4]. And we've built one of the most comprehensive resources for software engineers to learn to use computer vision with our popular blog[5] and YouTube channel[6].

    We have several openings available but are primarily looking for strong technical generalists who want to help us democratize computer vision and like to wear many hats and have an outsized impact. Our engineering culture is built on a foundation of autonomy & we don't consider an engineer fully ramped until they can "choose their own loss function". At Roboflow, engineers aren't just responsible for building things but also for helping us figure out what we should build next. We're builders & problem solvers; not just coders. (For this reason we also especially love hiring past and future founders.)

    We're currently hiring full-stack engineers for our ML and web platform teams, a web developer to bridge our product and marketing teams, several technical roles on the sales & field engineering teams, and our first applied machine learning researcher to help push forward the state of the art in computer vision.

    [1]: https://roboflow.com/?ref=whoishiring0224

    [2]: https://roboflow.com/universe?ref=whoishiring0224

    [3]: https://github.com/autodistill/autodistill

    [4]: https://github.com/roboflow/supervision

    [5]: https://blog.roboflow.com/?ref=whoishiring0224

    [6]: https://www.youtube.com/@Roboflow

  • Is supervised learning dead for computer vision?
    9 projects | news.ycombinator.com | 28 Oct 2023
    The places in which a vision model is deployed are different from those in which a language model is deployed.

    A vision model may be deployed on cameras without an internet connection, with data retrieved later; on camera streams in a factory; or for sports broadcasts where low latency is needed. In many cases, real-time -- or close to real-time -- performance is required.

    Fine-tuned models can deliver the requisite performance for vision tasks with relatively low computational power compared to the LLM equivalent. The weights are small relative to LLM weights.

    LLMs are often deployed via API. This is practical for some vision applications (e.g. bulk processing), but for many use cases, not being able to run on the edge is a dealbreaker.

    Foundation models certainly have a place.

    CLIP, for example, is fast and may be used for tasks like video classification. Where I see opportunity right now is in using foundation models to train fine-tuned models. The foundation model acts as an automatic labeling tool, and the labels it produces become the dataset for training your smaller model. (Disclosure: I co-maintain a Python package that lets you do this, Autodistill -- https://github.com/autodistill/autodistill).
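
    A minimal sketch of that workflow, adapted from the Autodistill README. The GroundedSAM and YOLOv8 connectors are separate installs (autodistill-grounded-sam, autodistill-yolov8), and the folder paths are placeholders; the labeled output path follows the README's input_folder + "_labeled" convention.

    ```python
    from autodistill.detection import CaptionOntology
    from autodistill_grounded_sam import GroundedSAM
    from autodistill_yolov8 import YOLOv8

    # Map foundation-model prompts to the class names the target model learns.
    ontology = CaptionOntology({"shipping container": "container"})

    # The big, slow foundation model auto-labels a folder of images...
    base_model = GroundedSAM(ontology=ontology)
    base_model.label(input_folder="./images", extension=".jpg")

    # ...and the resulting dataset trains a small, fast supervised model.
    target_model = YOLOv8("yolov8n.pt")
    target_model.train("./images_labeled/data.yaml", epochs=50)
    ```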

    SAM (segmentation), CLIP (embeddings, classification), Grounding DINO (zero-shot object detection) in particular have a myriad of use cases, one of which is automated labeling.

    I'm looking forward to seeing foundation models improve for all the opportunities that will bring!

  • Ask HN: Who is hiring? (October 2023)
    9 projects | news.ycombinator.com | 2 Oct 2023
  • Autodistill: A new way to create CV models
    6 projects | /r/developersIndia | 30 Sep 2023
    Autodistill
  • Show HN: Autodistill, automated image labeling with foundation vision models
    1 project | news.ycombinator.com | 6 Sep 2023
  • Show HN: Pip install inference, open source computer vision deployment
    4 projects | news.ycombinator.com | 23 Aug 2023
    Thanks for the suggestion! Definitely agree; we’ve seen that work extremely well for Supervision[1] and Autodistill[2], some of our other open source projects.

    There’s still a lot of polish like this we need to do; we’ve spent most of our effort cleaning up the code and documentation to prep for open sourcing the repo.

    Next step is improving the usability of the pip pathway (that interface was just added; the http server was all we had for internal use). Then we’re going to focus on improving the content and expanding the models it supports.

    [1] https://github.com/roboflow/supervision

    [2] https://github.com/autodistill/autodistill

  • Ask HN: Who is hiring? (August 2023)
    13 projects | news.ycombinator.com | 1 Aug 2023
    Roboflow | Multiple Roles | Full-time (Remote, SF, NYC) | https://roboflow.com/careers?ref=whoishiring0823

    Roboflow is the fastest way to use computer vision in production. We help developers give their software the sense of sight. Our end-to-end platform[1] provides tooling for image collection, annotation, dataset exploration and curation, training, and deployment.

    Over 250k engineers (including engineers from 2/3 Fortune 100 companies) build with Roboflow. We now host the largest collection of open source computer vision datasets and pre-trained models[2]. We are pushing forward the CV ecosystem with open source projects like Autodistill[3] and Supervision[4]. And we've built one of the most comprehensive resources for software engineers to learn to use computer vision with our popular blog[5] and YouTube channel[6].

    We have several openings available, but are primarily looking for strong technical generalists who want to help us democratize computer vision and like to wear many hats and have an outsized impact. Our engineering culture is built on a foundation of autonomy & we don't consider an engineer fully ramped until they can "choose their own loss function". At Roboflow, engineers aren't just responsible for building things but also for helping figure out what we should build next. We're builders & problem solvers; not just coders. (For this reason we also especially love hiring past and future founders.)

    We're currently hiring full-stack engineers for our ML and web platform teams, a web developer to bridge our product and marketing teams, several technical roles on the sales & field engineering teams, and our first applied machine learning researcher to help push forward the state of the art in computer vision.

    [1]: https://roboflow.com/?ref=whoishiring0823

    [2]: https://roboflow.com/universe?ref=whoishiring0823

    [3]: https://github.com/autodistill/autodistill

    [4]: https://github.com/roboflow/supervision

    [5]: https://blog.roboflow.com/?ref=whoishiring0823

    [6]: https://www.youtube.com/@Roboflow

  • AI That Teaches Other AI
    4 projects | news.ycombinator.com | 20 Jul 2023
    > Their SKILL tool involves a set of algorithms that make the process go much faster, they said, because the agents learn at the same time in parallel. Their research showed if 102 agents each learn one task and then share, the amount of time needed is reduced by a factor of 101.5 after accounting for the necessary communications and knowledge consolidation among agents.

    This is a really interesting idea. It's like the reverse of knowledge distillation (which I've been thinking about a lot[1]) where you have one giant model that knows a lot about a lot & you use that model to train smaller, faster models that know a lot about a little.

    Instead, if you could train a lot of models that each know a lot about a little (which is far less computationally intensive because the problem space is so confined) and combine them into a generalized model, that would be hugely beneficial.

    Unfortunately, after a bit of digging into the paper & Github repo[2], this doesn't seem to be what's happening at all.

    > The code will learn 102 small and separte heads(either a linear head or a linear head with a task bias) for each tasks respectively in order. This step can be parallized on multiple GPUS with one task per GPU. The heads will be saved in the weight folder. After that, the code will learn a task mapper(Either using GMMC or Mahalanobis) to distinguish image task-wisely. Then, all images will be evaluated in the same time without a task label.

    So the knowledge isn't being combined into a generalized model (and the agents aren't learning from each other). They're just training a bunch of independent models for specific tasks and adding a model-selection step that maps an image to the most relevant "expert". My guess is you could do the same thing using CLIP vectors as the routing method to dispatch to supervised models trained on specific datasets (we found that datasets largely live in distinct regions of CLIP-space[3]); see the sketch after the footnotes below.

    [1] https://github.com/autodistill/autodistill

    [2] https://github.com/gyhandy/Shared-Knowledge-Lifelong-Learnin...

    [3] https://www.rf100.org
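
    As promised above, a minimal sketch of that CLIP-routing idea. It is hypothetical: it uses random numpy stand-ins for real CLIP embeddings (which you would compute with a library such as open_clip) and nearest-centroid cosine similarity rather than the paper's GMMC/Mahalanobis mappers.

    ```python
    import numpy as np

    def route(image_embedding: np.ndarray, centroids: np.ndarray) -> int:
        """Pick the expert whose dataset centroid is most cosine-similar
        to the query image's CLIP embedding."""
        emb = image_embedding / np.linalg.norm(image_embedding)
        cents = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
        return int(np.argmax(cents @ emb))

    # Random stand-ins for real CLIP vectors (512-dim, e.g. ViT-B/32);
    # one centroid per task-specific expert, as in the 102-task SKILL setup.
    rng = np.random.default_rng(0)
    centroids = rng.normal(size=(102, 512))
    query = rng.normal(size=512)
    expert_id = route(query, centroids)   # dispatch to models[expert_id]
    ```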

  • Autodistill: Use foundation vision models to train smaller, supervised models
    1 project | news.ycombinator.com | 22 Jun 2023
  • Autodistill: use big slow foundation models to train small fast supervised models (r/MachineLearning)
    1 project | /r/datascienceproject | 10 Jun 2023

LoRA

Posts with mentions or reviews of LoRA. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-08.
  • DECT NR+: A technical dive into non-cellular 5G
    1 project | news.ycombinator.com | 2 Apr 2024
    This seems to be an order of magnitude better than LoRa (https://lora-alliance.org/, not https://arxiv.org/abs/2106.09685). LoRa doesn't have all the features this one does, like OFDM, TDM, FDM, and HARQ. I didn't know there's spectrum dedicated for DECT use.
  • Training LLMs Taking Too Much Time? Technique you need to know to train it faster
    1 project | dev.to | 3 Mar 2024
    So to solve this, we researched optimization techniques and found LoRA, which stands for Low-Rank Adaptation of Large Language Models.
  • OpenAI employee: GPT-4.5 rumor was a hallucination
    1 project | news.ycombinator.com | 17 Dec 2023
    > Anyone have any ideas / knowledge on how they deploy little incremental fixes to exploited jailbreaks, etc?

    LoRA[1] would be my guess.

    For a detailed explanation I recommend the paper. But the short explanation is that it's a trick that lets you train small, low-rank weight updates which, when added to the original model's weights, give you the behavior you want.

    1: https://arxiv.org/abs/2106.09685
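
    In equation form: the pretrained weight W stays frozen, and the trained update is a low-rank product, giving an effective weight W + (alpha/r)·BA. A minimal PyTorch sketch of the idea (illustrative only, not the paper's reference implementation; the layer size is arbitrary):

    ```python
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Illustrative LoRA wrapper: y = base(x) + (alpha/r) * x @ A.T @ B.T."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)          # freeze the pretrained weights
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: starts as a no-op
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(nn.Linear(768, 768), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(trainable)   # 8*(768+768) = 12,288 trainable vs 589,824 frozen weights
    ```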

  • Can a LoRa be used on models other than Stable Diffusion?
    2 projects | /r/StableDiffusion | 8 Dec 2023
    LoRA was initially developed for large language models, https://arxiv.org/abs/2106.09685 (2021). It was later that people discovered that it worked REALLY well for diffusion models.
  • StyleTTS2 – open-source Eleven Labs quality Text To Speech
    10 projects | news.ycombinator.com | 19 Nov 2023
    Curious if we'll see a Civitai-style LoRA[1] marketplace for text-to-speech models.

    1 = https://github.com/microsoft/LoRA

  • Andreessen Horowitz Invests in Civitai, Which Profits from Nonconsensual AI Porn
    1 project | news.ycombinator.com | 14 Nov 2023
    From https://arxiv.org/abs/2106.09685:

    > LoRA: Low-Rank Adaptation of Large Language Models

    > An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency.

  • Is supervised learning dead for computer vision?
    9 projects | news.ycombinator.com | 28 Oct 2023
    Yes, your understanding is correct. However, instead of adding a head on top of the network, most fine-tuning is currently done with LoRA (https://github.com/microsoft/LoRA). This introduces low-rank matrices between different layers of your model; those are then trained on your data while the rest of the model's weights are frozen.
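
    Concretely, the usage pattern from that repo's loralib package looks roughly like this; treat it as a sketch of the README quickstart (the layer sizes and checkpoint name are placeholders):

    ```python
    import torch
    import torch.nn as nn
    import loralib as lora   # pip install loralib

    # Swap selected nn.Linear layers for LoRA-augmented versions (rank r=16).
    model = nn.Sequential(
        lora.Linear(768, 768, r=16),
        nn.ReLU(),
        lora.Linear(768, 10, r=16),
    )

    # Freeze everything except the injected low-rank matrices.
    lora.mark_only_lora_as_trainable(model)

    # ...train as usual; afterwards only the small LoRA weights need saving.
    torch.save(lora.lora_state_dict(model), "checkpoint_lora.pt")
    ```
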
  • Run LLMs at home, BitTorrent‑style
    10 projects | news.ycombinator.com | 17 Sep 2023
    Somewhat yes. See "LoRA": https://arxiv.org/abs/2106.09685

    They're not composable in the sense that you can take these adaptation layers and arbitrarily combine them, but training different models while sharing a common base of weights is a solved problem.
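
    A toy sketch of what "shared base, per-task adapters" means in practice: each fine-tune ships only a small (A, B) pair that is merged into the same frozen weight matrix on demand. Shapes and the scale factor here are hypothetical.

    ```python
    import torch

    d, r = 768, 8
    W_base = torch.randn(d, d)   # shared pretrained weight, never modified

    # Each fine-tune is ~2*r*d numbers instead of a full d*d matrix.
    adapters = {
        "task_a": (torch.randn(r, d) * 0.01, torch.randn(d, r) * 0.01),
        "task_b": (torch.randn(r, d) * 0.01, torch.randn(d, r) * 0.01),
    }

    def effective_weight(task: str, scale: float = 2.0) -> torch.Tensor:
        A, B = adapters[task]
        return W_base + scale * (B @ A)   # swap tasks without retransmitting W_base

    W_a = effective_weight("task_a")   # serve task A
    W_b = effective_weight("task_b")   # switch to task B cheaply
    ```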

  • New LoRa RF distance record: 1336 km / 830 mi
    1 project | news.ycombinator.com | 7 Sep 2023
    With all the naive AI zealotry on HN, can you really fault me?

    They're referring to this:

    https://arxiv.org/abs/2106.09685

  • Open-source Fine-Tuning on Codebase with Refact
    2 projects | dev.to | 5 Sep 2023
    It's possible to fine-tune all parameters (called a "full fine-tune"), but recently PEFT methods have become popular. PEFT stands for Parameter-Efficient Fine-Tuning. Several methods are available; the most popular so far is LoRA (2106.09685), which can train less than 1% of the original weights. LoRA has one important parameter -- the tensor size, called lora_r. It defines how much information LoRA can add to the network.

    If your codebase is small, the fine-tuning process will see the same data over and over again, many times in a loop. We found that for a smaller codebase, small LoRA tensors work best because they won't overfit as much -- the tensors just don't have the capacity to fit the limited training set exactly. As the codebase gets bigger, the tensors should become bigger as well. We also unfreeze token embeddings at a certain codebase size.

    To pick all the parameters automatically, we have developed a heuristic that calculates a score based on the source files it sees. This score is then used to determine the appropriate LoRA size, the number of fine-tuning steps, and other parameters. We have tested this heuristic on several beta-test clients, on small codebases of several files, and on large codebases like the Linux kernel (about 50,000 useful source files). If the heuristic doesn't work for you for whatever reason, you can set all the parameters yourself.
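
    The excerpt doesn't publish the actual heuristic, so the following is a purely hypothetical illustration of the monotone rule it describes (small codebase -> small lora_r, embeddings unfrozen only at scale); the function name and thresholds are invented.

    ```python
    def pick_lora_params(num_source_files: int) -> dict:
        """Hypothetical sizing rule illustrating the idea in the excerpt;
        the real Refact heuristic and its thresholds are not public here."""
        if num_source_files < 50:       # tiny repo: low-capacity tensors resist overfitting
            return {"lora_r": 4, "unfreeze_embeddings": False}
        if num_source_files < 5_000:    # mid-size repo: more capacity is safe
            return {"lora_r": 16, "unfreeze_embeddings": False}
        return {"lora_r": 64, "unfreeze_embeddings": True}   # e.g. Linux-kernel scale

    print(pick_lora_params(50_000))   # {'lora_r': 64, 'unfreeze_embeddings': True}
    ```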

What are some alternatives?

When comparing autodistill and LoRA, you can also consider the following projects:

anylabeling - Effortless AI-assisted data labeling with AI support from YOLO, Segment Anything, MobileSAM!!

LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.

tabby - Self-hosted AI coding assistant

ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.

Shared-Knowledge-Lifelong-Learning

ControlNet - Let us control diffusion models!

segment-geospatial - A Python package for segmenting geospatial data with the Segment Anything Model (SAM)

peft - 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

opentofu - OpenTofu lets you declaratively manage your cloud infrastructure.

alpaca-lora - Instruct-tune LLaMA on consumer hardware

supervision - We write your reusable computer vision tools. 💜

LLaMA-Adapter - [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters