keras-cv VS i6_experiments

Compare keras-cv vs i6_experiments and see what their differences are.

               keras-cv                                    i6_experiments
Mentions       4                                           1
Stars          946                                         6
Growth         3.5%                                        -
Activity       9.3                                         10.0
Last commit    3 days ago                                  4 days ago
Language       Python                                      Python
License        GNU General Public License v3.0 or later    Mozilla Public License 2.0
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

keras-cv

Posts with mentions or reviews of keras-cv. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-11.

i6_experiments

Posts with mentions or reviews of i6_experiments. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-11.
  • Keras Core: Keras for TensorFlow, Jax, and PyTorch
    5 projects | news.ycombinator.com | 11 Jul 2023
    That looks very interesting.

    I have actually developed (and am still developing) something very similar, what we call the RETURNN frontend: a new frontend + new backends for our RETURNN framework. The new frontend supports Python code to define models very much like what you see in PyTorch or Keras, i.e. a core Tensor class, a base Module class you can derive from, a Parameter class, and then a core functional API to perform all the computations. It supports multiple backends, currently mostly TensorFlow (graph-based) and PyTorch, but JAX was also planned. Some details here: https://github.com/rwth-i6/returnn/issues/1120
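    To make that design concrete, here is a toy sketch of the pattern (hypothetical names and numpy as the stand-in backend, not the actual RETURNN frontend API): a core Tensor, a Parameter, a Module base class, and a functional API that dispatches to a pluggable backend.

        import numpy as np

        class Tensor:
            """Wraps a backend-native array (here: numpy)."""
            def __init__(self, raw):
                self.raw = raw

        class Parameter(Tensor):
            """A Tensor that a Module treats as trainable state."""

        class NumpyBackend:
            """One such class per backend (TF graph, PyTorch, JAX, ...)."""
            def matmul(self, a, b):
                return Tensor(np.matmul(a.raw, b.raw))
            def relu(self, a):
                return Tensor(np.maximum(a.raw, 0.0))

        backend = NumpyBackend()  # would be selected globally per config

        # Core functional API: thin wrappers dispatching to the backend.
        def matmul(a, b): return backend.matmul(a, b)
        def relu(a): return backend.relu(a)

        class Module:
            def __call__(self, *args):
                return self.forward(*args)

        class Linear(Module):
            def __init__(self, n_in, n_out):
                self.weight = Parameter(np.random.randn(n_in, n_out) * 0.1)
            def forward(self, x):
                return matmul(x, self.weight)

        layer = Linear(4, 8)
        y = relu(layer(Tensor(np.random.randn(2, 4))))
        print(y.raw.shape)  # -> (2, 8)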

    (Note that we went a step further and made named dimensions a core principle of the framework.)
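    As a toy illustration of what named dimensions buy you (again a hypothetical API, not RETURNN's actual Dim/Tensor classes): dims are objects, and operations address them by object rather than by axis index, so e.g. a reduction over time stays correct regardless of axis order.

        import numpy as np

        class Dim:
            def __init__(self, name, size):
                self.name, self.size = name, size

        class NamedTensor:
            def __init__(self, raw, dims):
                assert raw.shape == tuple(d.size for d in dims)
                self.raw, self.dims = raw, dims
            def reduce_sum(self, dim):
                axis = self.dims.index(dim)  # find the axis via the dim object
                rest = tuple(d for d in self.dims if d is not dim)
                return NamedTensor(self.raw.sum(axis=axis), rest)

        batch, time, feat = Dim("batch", 2), Dim("time", 5), Dim("feature", 3)
        x = NamedTensor(np.ones((2, 5, 3)), (batch, time, feat))
        y = x.reduce_sum(time)  # no axis numbers anywhere
        print([d.name for d in y.dims])  # -> ['batch', 'feature']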

    (Example beam search implementation: https://github.com/rwth-i6/i6_experiments/blob/14b66c4dc74c0...)

    One difficulty I found was how to design the API in a way that works well both for eager-mode frameworks (PyTorch, TF eager mode) and graph-based frameworks (TF graph mode, JAX). That mostly involves everything where there is some state, or code which should not just execute in the inner training loop but e.g. only for initialization, or after each epoch, or whatever. So for example:

    - Parameter initialization.

    - Anything involving buffers, e.g. batch normalization (see the first sketch after this list).

    - Other custom training loops? Or an outer loop and an inner loop (e.g. GAN training)?

    - How to implement something like weight normalization? In PyTorch, module.param is renamed, and then there is a forward pre-hook which recalculates module.param on the fly for each call to forward (see the second sketch after this list). So, should one just follow the same logic for both eager mode and graph mode?

    - How to deal with a control flow context, accessing values outside the loop which came from inside, etc.? Those things are naturally possible in eager mode, where you would get the most recent value, and where there is no real control flow context.

    - Device logic: should the device be defined explicitly for each tensor (like PyTorch), or should tensors be moved to the GPU automatically and eagerly (like TensorFlow)? And should moving from one device to another (or to CPU) be automatic, or must it be explicit?
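    On the buffer point, here is a minimal sketch of why it is awkward (PyTorch-style eager code; a simplified batch norm, not the full torch.nn.BatchNorm1d): the running statistics are updated as a side effect of forward. In eager mode that is a plain in-place assignment; in a graph-based backend the same update has to become an explicit assign op wired into the graph, which is exactly where a shared eager/graph API gets tricky.

        import torch

        class TinyBatchNorm(torch.nn.Module):
            """Simplified 1D batch norm with running-stats buffers."""
            def __init__(self, num_features, momentum=0.1, eps=1e-5):
                super().__init__()
                self.momentum, self.eps = momentum, eps
                self.register_buffer("running_mean", torch.zeros(num_features))
                self.register_buffer("running_var", torch.ones(num_features))

            def forward(self, x):  # x: (batch, features)
                if self.training:
                    mean = x.mean(dim=0)
                    var = x.var(dim=0, unbiased=False)
                    with torch.no_grad():  # state update as a side effect
                        self.running_mean.lerp_(mean, self.momentum)
                        self.running_var.lerp_(var, self.momentum)
                else:
                    mean, var = self.running_mean, self.running_var
                return (x - mean) / torch.sqrt(var + self.eps)

        bn = TinyBatchNorm(4)
        bn.train(); _ = bn(torch.randn(8, 4))   # updates the buffers
        bn.eval(); out = bn(torch.randn(8, 4))  # uses the frozen buffers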
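    And for the weight normalization point, a simplified sketch of the PyTorch mechanism described above (cf. torch.nn.utils.weight_norm; whole-tensor norm here for brevity, where the real implementation norms per output channel): the original parameter is removed, replaced by two new ones, and a forward pre-hook recomputes the effective weight before every forward call.

        import torch
        import torch.nn as nn

        def add_weight_norm(module: nn.Module, name: str = "weight") -> nn.Module:
            weight = getattr(module, name)
            del module._parameters[name]  # the "rename": drop the original parameter
            module.register_parameter(name + "_g", nn.Parameter(weight.norm().detach()))
            module.register_parameter(name + "_v", nn.Parameter(weight.detach()))

            def recompute(mod, inputs):
                g = getattr(mod, name + "_g")
                v = getattr(mod, name + "_v")
                setattr(mod, name, g * v / v.norm())  # weight = g * v / ||v||

            module.register_forward_pre_hook(recompute)
            recompute(module, None)  # make module.<name> available right away
            return module

        layer = add_weight_norm(nn.Linear(4, 3))
        out = layer(torch.randn(2, 4))  # hook runs first, then forward uses .weight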

    I see that you have keras_core.callbacks.LambdaCallback, which is maybe similar, but can you effectively update the logic of the module in there?
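    For reference, a LambdaCallback just hooks plain functions into the training loop; a minimal usage sketch (assuming the Keras Core API mirrors tf.keras here):

        import numpy as np
        import keras_core as keras

        inputs = keras.Input(shape=(4,))
        outputs = keras.layers.Dense(1)(inputs)
        model = keras.Model(inputs, outputs)
        model.compile(optimizer="sgd", loss="mse")

        # Plain functions wired into the loop; the open question above is
        # whether this is enough to really update module logic on the fly.
        log_epoch = keras.callbacks.LambdaCallback(
            on_epoch_end=lambda epoch, logs: print(f"epoch {epoch}: loss={logs['loss']:.4f}")
        )

        x = np.random.randn(32, 4).astype("float32")
        y = np.random.randn(32, 1).astype("float32")
        model.fit(x, y, epochs=2, callbacks=[log_epoch], verbose=0)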

What are some alternatives?

When comparing keras-cv and i6_experiments you can also consider the following projects:

stable-diffusion-tensorflow - Stable Diffusion in TensorFlow / Keras

keras-nlp - Modular Natural Language Processing workflows with Keras

keras-core - A multi-backend implementation of the Keras API, with support for TensorFlow, JAX, and PyTorch.

returnn - The RWTH extensible training framework for universal recurrent neural networks