| | captum | nx |
|---|---|---|
| Mentions | 11 | 36 |
| Stars | 4,612 | 2,491 |
| Growth | 2.4% | 1.8% |
| Activity | 8.6 | 9.3 |
| Latest Commit | 15 days ago | 6 days ago |
| Language | Python | Elixir |
| License | BSD 3-clause "New" or "Revised" License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
captum
-
[D] [R] Research Problem about Weakly Supervised Learning for CT Image Semantic Segmentation
Most likely: NNs in general love shortcut learning (see Geirhos et al. 2020). Local explanations such as Grad-CAM are quite noisy, and sometimes even inconsistent (see Seo et al. 2018). In my experience, integrated gradients (see Sundararajan et al. 2017) does a better job than Grad-CAM (also, add a noise tunnel), though that is only based on my limited experience. I would totally recommend using the implementations from the Captum library for local explanations.
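For concreteness, here is a minimal sketch of integrated gradients with a noise tunnel in Captum; the model, input shapes, and target class are invented placeholders, not anything from the original post:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, NoiseTunnel

# Hypothetical stand-in classifier; substitute your own CT/segmentation model.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 28 * 28, 10),
)
model.eval()

inputs = torch.randn(4, 1, 28, 28)  # batch of fake single-channel images

ig = IntegratedGradients(model)
nt = NoiseTunnel(ig)  # SmoothGrad-style averaging over noisy copies of the input

# Average IG attributions over 10 noisy copies of each input,
# attributing the logit of class 3 (an arbitrary example target).
attributions = nt.attribute(
    inputs, nt_type="smoothgrad", nt_samples=10, stdevs=0.1, target=3
)
print(attributions.shape)  # same shape as inputs: (4, 1, 28, 28)
```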
-
[D] Off-the-shelf image saliency scoring models?
Take a look at captum
-
Can you interrogate a machine learning model to find out why it gave certain predictions?
Sometimes. If explainable predictions are part of your business requirements, it's probably better not to rely entirely on black box models and instead design a system that gives you the information you need as part of it. If you end up using black box models, there are still methods that attempt to help attribute explanations to your prediction. Here's an example of a toolkit for attributing explanations post-hoc to black box model predictions: https://github.com/pytorch/captum
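As a rough illustration of the perturbation-based side of such toolkits, here is a sketch using Captum's FeatureAblation, which only needs a forward function and so treats the model as a black box; the tabular model below is a made-up placeholder:

```python
import torch
import torch.nn as nn
from captum.attr import FeatureAblation

# Made-up tabular model standing in for the black box.
model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

x = torch.randn(2, 5)  # two samples, five features

# FeatureAblation only calls the model's forward pass: it ablates each
# feature in turn (replacing it with a baseline of zero by default) and
# records how the target score changes, so no gradients are required.
ablator = FeatureAblation(model)
attr = ablator.attribute(x, target=0)
print(attr)  # per-feature contribution estimates, shape (2, 5)
```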
- [D] DL Practitioners, Do You Use Layer Visualization Tools Such as GradCam in Your Process?
-
What kind of explainability techniques exist for Reinforcement learning?
The straightforward way to interpret an RL agent's decisions is to use the Captum library.
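As a sketch of what that can look like, here is a saliency map over a policy network's chosen action; the network, observation size, and action count are all invented for illustration:

```python
import torch
import torch.nn as nn
from captum.attr import Saliency

# Invented policy network: 8-dimensional observation -> 4 action logits.
policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
policy.eval()

obs = torch.randn(1, 8, requires_grad=True)
action = policy(obs).argmax(dim=1)  # the action the agent would pick

# Gradient of the chosen action's logit w.r.t. the observation:
# which observation features most influenced this decision?
saliency = Saliency(policy)
attr = saliency.attribute(obs, target=action)
print(attr)
```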
-
[D] How do you choose which Black-Box Explainability method to use?
My use-case is research-oriented. I work on Explainable AI. Generally, the best package I've come across for computing attributions in PyTorch is Captum. If your object detector is in PyTorch, you can perhaps build it in.
-
PyTorch vs. TensorFlow in 2022
Do any JAX experts know if there is an equivalent to https://captum.ai/ - a model interpretability library for pytorch?
In particular I want to be able to measure feature importance on both inputs and internal layers on a sample-by-sample basis. This is the only thing currently holding me back from using JAX right now.
Alternatively, a simple to read/understand/port implementation of DeepLIFT would work too.
thanks
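Not an authoritative answer, but integrated gradients is short enough to hand-roll in JAX; here is a minimal sketch (the toy model is invented, and internal-layer attribution would follow the same pattern by splitting the network at the layer of interest and running the procedure on its activations):

```python
import jax
import jax.numpy as jnp

def integrated_gradients(model_fn, x, baseline, target, steps=50):
    """Riemann-sum approximation of integrated gradients for one sample."""
    alphas = jnp.linspace(0.0, 1.0, steps)
    # Points along the straight-line path from baseline to x.
    path = baseline[None, :] + alphas[:, None] * (x - baseline)[None, :]
    # Gradient of the target logit w.r.t. the input, batched over the path.
    grad_fn = jax.vmap(jax.grad(lambda p: model_fn(p)[target]))
    grads = grad_fn(path)
    return (x - baseline) * grads.mean(axis=0)

# Toy linear "model" just to make the sketch executable.
w = jnp.arange(4.0)
model_fn = lambda x: jnp.stack([x @ w, -(x @ w)])

attr = integrated_gradients(model_fn, jnp.ones(4), jnp.zeros(4), target=0)
print(attr)  # per-feature attributions; for this linear model they sum to f(x) - f(baseline)
```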
-
DeepLIFT or other explainable api implementations for JAX (like captum for pytorch)?
I'm interested in using JAX but am having a hard time finding anything similar to Captum for the PyTorch world.
- How to extract features from a convolutional neural network (CNN) on raw data with explainable AI (XAI) techniques?
-
Looking for help regarding explainable AI
Do you actually want to implement something? There are decent explainability libraries now, e.g., [AIX360](https://aix360.mybluemix.net/), [InterpretML](https://interpret.ml/), or [captum](https://captum.ai/). PyTorch + maybe PyTorch Lightning + Captum might be the quickest way to actually implement something like an explainable neural net yourself. Do the standard tutorials for each of them and watch a few YT videos (or follow a Coursera course or something like that) about how these things work in theory and practice, and you'll get up to speed relatively quickly. You will not be able to do useful work in ML without actually learning the ropes.
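If you go the PyTorch + Captum route, a minimal layer-attribution sketch might look like the following; the tiny CNN and the target class are placeholders for whatever model you actually train:

```python
import torch
import torch.nn as nn
from captum.attr import LayerGradCam

# Placeholder CNN; swap in your trained model.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10)
        )

    def forward(self, x):
        return self.head(torch.relu(self.conv(x)))

model = Net().eval()
x = torch.randn(1, 3, 32, 32)

# Grad-CAM over a chosen convolutional layer: where in the feature map
# does the evidence for a given class concentrate?
gradcam = LayerGradCam(model, model.conv)
attr = gradcam.attribute(x, target=5)  # class 5 is an arbitrary example
print(attr.shape)  # (1, 1, 32, 32): a heatmap over the conv layer's output
```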
nx
-
Unpacking Elixir: Concurrency
Does nx not work for you? https://github.com/elixir-nx/nx/tree/main/nx#readme
-
A LiveView Is a Process
It has historically not been great at numerical computing. This is being addressed by a relatively new project called Nx. https://github.com/elixir-nx/nx
It is not the right choice for CPU intensive tasks like graphics, HFT, etc. Some companies have used Rust to write native extensions for those kinds of problems. https://discord.com/blog/using-rust-to-scale-elixir-for-11-m...
- How does Elixir stack up to Julia in the future of writing machine-learning software?
-
Data wrangling in Elixir with Explorer, the power of Rust, the elegance of R
José from the Livebook team. I don't think I can make a pitch because I have limited Python/R experience to use as reference.
My suggestion is for you to give it a try for a day or two and see what you think. I am pretty sure you will find weak spots and I would be very happy to hear any feedback you may have. You can find my email on my GitHub profile (same username).
In general we have grown a lot since the Numerical Elixir effort started two years ago. Here are the main building blocks:
* Nx (https://github.com/elixir-nx/nx/tree/main/nx#readme): equivalent to NumPy, deeply inspired by JAX. Runs on both CPU and GPU via Google XLA (also used by JAX/TensorFlow) and supports tensor serving out of the box
* Axon (https://github.com/elixir-nx/axon): Nx-powered neural networks
* Bumblebee (https://github.com/elixir-nx/bumblebee): Equivalent to HuggingFace Transformers. We have implemented several models and that's what powers the Machine Learning integration in Livebook (see the announcement for more info: https://news.livebook.dev/announcing-bumblebee-gpt2-stable-d...)
* Explorer (https://github.com/elixir-nx/explorer): Series and DataFrames, as per this thread.
* Scholar (https://github.com/elixir-nx/scholar): Nx-based traditional Machine Learning. This one is the most recent effort of them all. We are treading the same path as scikit-learn but are quite early on. However, because we are built on Nx, everything is differentiable, GPU-ready, distributable, etc.
Regarding visualization, we have "smart cells" for VegaLite and MapLibre, similar to how we did "Data Transformations" in the video above. They help you get started with your visualizations and you can jump deep into the code if necessary.
I hope this helps!
-
Elixir and Rust is a good mix
> I guess, why not use Rust entirely instead of as a FFI into Elixir or other backend language?
Because Rust brings none of the benefits of the BEAM ecosystem to the table.
I was an early Elixir adopter, not working currently as an Elixir developer, but I have deployed one of the largest Elixir applications for a private company in my country.
I know it has limits, but the language itself is only a small part of the whole.
Take ML: José Valim and Sean Moriarity studied the problem, made a plan to tackle it, and started solving it piece by piece [1] in a tightly integrated manner. It feels natural, as if Elixir always had those capabilities, in a way that no other language does. To put the icing on the cake, the community released Livebook [2] to interactively explore code and use the new tools in the simplest way possible; something Python notebooks, after a decade of progress, only dream of being capable of.
That's not to say that Elixir is superior as a language, but the ecosystem is flourishing and the community is able to extract 100% of the benefits from the tools and create marvellously crafted new ones that push the limits forward every time, in such a simple manner that it looks like magic.
And going back to Rust: you can write Rust if you need speed, or for whatever reason you feel it's the right tool for the job, and it's totally integrated [3][4], again in a way that many other languages can only dream of; it's in fact the reason I learned Rust in the first place.
The opposite is not true: if you write Rust, you write Rust, and that's it. You can't take advantage of the many features the BEAM offers: OTP, hot code reloading, full inspection of running systems, distribution, scalability, fault tolerance, soft real time, etc.
But of course, if you don't see any advantage in them, you probably don't need them (another option is that you don't yet know you want them :] ). In that case Rust is as good as any other language, but for a backend, even though I gently despise it, Java (or Kotlin) might be a better option.
[1] https://github.com/elixir-nx/nx https://github.com/elixir-nx/axon
[2] https://livebook.dev/
[3] https://github.com/rusterlium/rustler
[4] https://dashbit.co/blog/rustler-precompiled
-
Distributed² Machine Learning Notebooks with Elixir and Livebook
(including docs and tests!): https://github.com/elixir-nx/nx/pull/1090
I'll be glad to answer questions about Nx or anything from Livebook's launch week!
-
Why Python keeps growing, explained
I think that experiment is taking shape with Elixir:
https://github.com/elixir-nx/nx
-
Does Nx use Metal in the backend?
However, the issue about this at Nx (https://github.com/elixir-nx/nx/issues/490) is already closed.
-
Do I need to use Elixir from Go perspective?
Outside of that, Elixir can be used for data pipelines, audio-video processing, and it is making inroads on Machine Learning with projects like Livebook, Nx, and Bumblebee.
- Elixir – HUGE Release Coming Soon
What are some alternatives?
shap - A game theoretic approach to explain the output of any machine learning model.
Elixir - Elixir is a dynamic, functional language for building scalable and maintainable applications
DALEX - moDel Agnostic Language for Exploration and eXplanation
gleam - ⭐️ A friendly language for building type-safe, scalable systems!
lucid - A collection of infrastructure and tools for research in neural network interpretability.
axon - Nx-powered Neural Networks
flax - Flax is a neural network library for JAX that is designed for flexibility.
dplyr - dplyr: A grammar of data manipulation
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
explorer - An open source block explorer
WeightWatcher - The WeightWatcher tool for predicting the accuracy of Deep Neural Networks
fib - Performance benchmark of top GitHub languages