Shared-Knowledge-Lifelong-Learning

By gyhandy

Shared-Knowledge-Lifelong-Learning reviews and mentions

Posts with mentions or reviews of Shared-Knowledge-Lifelong-Learning. The most recent was on 2023-07-20.
  • AI That Teaches Other AI
    4 projects | news.ycombinator.com | 20 Jul 2023
    > Their SKILL tool involves a set of algorithms that make the process go much faster, they said, because the agents learn at the same time in parallel. Their research showed if 102 agents each learn one task and then share, the amount of time needed is reduced by a factor of 101.5 after accounting for the necessary communications and knowledge consolidation among agents.

    This is a really interesting idea. It's like the reverse of knowledge distillation (which I've been thinking about a lot[1]) where you have one giant model that knows a lot about a lot & you use that model to train smaller, faster models that know a lot about a little.
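
    For the distillation direction, the usual recipe is to train the small model against the big model's softened output distribution alongside the ordinary label loss. A minimal PyTorch-style sketch of that idea (the temperature and loss weighting are illustrative defaults, not autodistill's actual API):

        import torch
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
            # Soften both distributions with temperature T and match them via KL divergence.
            soft_teacher = F.log_softmax(teacher_logits / T, dim=-1)
            soft_student = F.log_softmax(student_logits / T, dim=-1)
            kd = F.kl_div(soft_student, soft_teacher, log_target=True, reduction="batchmean") * T * T
            # Keep a standard cross-entropy term on the ground-truth labels.
            ce = F.cross_entropy(student_logits, labels)
            return alpha * kd + (1 - alpha) * ce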

    Instead, if you could train a lot of models that know a lot about a little (which is a lot less computationally intensive because the problem space is so confined) and combine them into a generalized model, that'd be hugely beneficial.

    Unfortunately, after a bit of digging into the paper & GitHub repo[2], this doesn't seem to be what's happening at all.

    > The code will learn 102 small and separate heads (either a linear head or a linear head with a task bias), one per task, in order. This step can be parallelized on multiple GPUs with one task per GPU. The heads will be saved in the weight folder. After that, the code will learn a task mapper (either GMMC or Mahalanobis) to distinguish images task-wise. Then, all images will be evaluated at the same time without a task label.
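
    Reading that description, the pipeline is roughly: freeze a shared backbone, fit one linear head per task independently (which is why it parallelizes so cleanly), then fit a per-task Gaussian so a Mahalanobis distance can route each test image to a head. A rough sketch of that reading (the dimensions, dummy data, and variable names below are illustrative, not the repo's actual code):

        import torch

        # Placeholder setup: a frozen backbone yields D-dim features, and
        # task_features[t] holds the training features for task t (dummy data here).
        D, num_tasks = 512, 102
        num_classes = {t: 10 for t in range(num_tasks)}
        task_features = {t: torch.randn(200, D) for t in range(num_tasks)}

        # One independent linear head per task -- trained separately, hence parallelizable.
        heads = {t: torch.nn.Linear(D, num_classes[t]) for t in range(num_tasks)}

        # Mahalanobis task mapper: one Gaussian (mean + inverse covariance) per task.
        stats = {}
        for t, feats in task_features.items():
            mu = feats.mean(dim=0)
            cov = torch.cov(feats.T) + 1e-3 * torch.eye(D)
            stats[t] = (mu, torch.linalg.inv(cov))

        def predict(x_feat):
            # 1) Route: pick the task whose feature distribution is closest.
            def maha(t):
                d = x_feat - stats[t][0]
                return (d @ stats[t][1] @ d).item()
            t = min(stats, key=maha)
            # 2) Classify with that task's head; the heads never exchange knowledge.
            return t, heads[t](x_feat).argmax().item()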

    So the knowledge isn't being combined (and the agents aren't learning from each other) into a generalized model. They're just training a bunch of independent models for specific tasks & adding a model-selection step that maps an image to the most relevant "expert". My guess is you could do the same thing using CLIP vectors as the routing method to supervised models trained on specific datasets (we found that datasets largely live in distinct regions of CLIP-space[3]).
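
    That CLIP-routing baseline could look something like the sketch below: embed each incoming image with CLIP, pick the dataset whose embedding centroid is nearest, and hand the image to that dataset's supervised model. The centroid registry and expert models here are hypothetical placeholders you'd build offline, and open_clip would work the same way as the clip package used here:

        import torch
        import clip                      # OpenAI CLIP package; open_clip works similarly
        from PIL import Image

        model, preprocess = clip.load("ViT-B/32", device="cpu")

        # Hypothetical registry, computed offline: one unit-norm centroid of CLIP image
        # embeddings per dataset, plus the supervised "expert" trained on that dataset.
        dataset_centroids: dict[str, torch.Tensor] = {}
        experts: dict[str, torch.nn.Module] = {}

        def route_and_classify(path):
            image = preprocess(Image.open(path)).unsqueeze(0)
            with torch.no_grad():
                z = model.encode_image(image)
                z = z / z.norm(dim=-1, keepdim=True)
            # Route to the dataset whose centroid is most similar in CLIP space...
            name = max(dataset_centroids, key=lambda n: (z @ dataset_centroids[n]).item())
            # ...then defer to that dataset's own supervised model.
            return name, experts[name](image)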

    [1] https://github.com/autodistill/autodistill

    [2] https://github.com/gyhandy/Shared-Knowledge-Lifelong-Learnin...

    [3] https://www.rf100.org
