Top 5 Python interpretable-ml Projects
Project mention: Potential of the Julia programming language for high energy physics computing | news.ycombinator.com | 2023-12-04

> Yes, julia can be called from other languages rather easily
This seems false to me. StaticCompiler.jl [1] puts in their limitations that "GC-tracked allocations and global variables do not work with compile_executable or compile_shlib. This has some interesting consequences, including that all functions within the function you want to compile must either be inlined or return only native types (otherwise Julia would have to allocate a place to put the results, which will fail)." PackageCompiler.jl [2] has the same limitations if I'm not mistaken. So then you have to fall back to distributing the Julia "binary" with a full Julia runtime, which is pretty heavy. There are some packages which do this. For example, PySR [3] does this.
There is some word going around though that there is an even better static compiler in the making, but as long as that one is not publicly available I'd say that Julia cannot easily be called from other languages.
[1]: https://github.com/tshort/StaticCompiler.jl
[2]: https://github.com/JuliaLang/PackageCompiler.jl
[3]: https://github.com/MilesCranmer/PySR
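To make the limitation above concrete, here is a minimal sketch of the workflow the commenter describes: compile a Julia function that returns only a native type with StaticCompiler.jl's `compile_shlib`, then call the resulting shared library from Python via `ctypes`. All names (`my_add`, `my_add.so`) are hypothetical, and the Python side falls back to a pure-Python implementation when the library has not been built, so the sketch stays runnable on its own.

```python
# Julia side (run once in Julia; names are hypothetical):
#   using StaticCompiler, StaticTools
#   my_add(a::Int64, b::Int64) = a + b            # returns only a native type,
#   compile_shlib(my_add, (Int64, Int64), "./")   # so it fits StaticCompiler's limits
#
# Python side: load the compiled shared library with ctypes.
import ctypes
import os

def call_my_add(a: int, b: int, libpath: str = "./my_add.so") -> int:
    """Call the statically compiled Julia function if the shared library
    exists; otherwise fall back to an equivalent pure-Python computation."""
    if os.path.exists(libpath):
        lib = ctypes.CDLL(libpath)
        lib.my_add.argtypes = [ctypes.c_int64, ctypes.c_int64]
        lib.my_add.restype = ctypes.c_int64
        return lib.my_add(a, b)
    return a + b  # fallback mirrors the Julia function's behavior

print(call_my_add(2, 3))  # 5
```

Note that this only works because `my_add` allocates nothing and returns an `Int64`; a function returning an array or any GC-tracked value would hit exactly the `compile_shlib` limitation quoted above, which is why heavier packages such as PySR ship a full Julia runtime instead.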
Project mention: Show HN: Open-Sourcing Google's Lattice Models in PyTorch | news.ycombinator.com | 2023-11-15
Python interpretable-ml related posts
- [D] [R] Research Problem about Weakly Supervised Learning for CT Image Semantic Segmentation
- [D] Off-the-shelf image saliency scoring models?
- Can you interrogate a machine learning model to find out why it gave certain predictions?
- What kind of explainability techniques exist for Reinforcement learning?
- [D] How do you choose which Black-Box Explainability method to use?
- DeepLIFT or other explainable api implementations for JAX (like captum for pytorch)?
- how to extract features from a (CNN) convolutional network having raw data with (XAI) explainable techinques?
Index
What are some of the best open-source interpretable-ml projects in Python? This list will help you find them:
# | Project | Stars
---|---|---
1 | captum | 4,568
2 | PySR | 1,882
3 | OSDT | 94
4 | pytorch-lattice | 25
5 | XAI | 18