coremltools vs hummingbird

| | coremltools | hummingbird |
| --- | --- | --- |
| Mentions | 11 | 9 |
| Stars | 4,063 | 3,302 |
| Stars growth (monthly) | 2.9% | 0.7% |
| Activity | 8.7 | 7.1 |
| Latest commit | 7 days ago | 10 days ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
coremltools
- CoreML commit from Apple mentions iOS17 exclusive features
- Lisa Su Saved AMD. Now She Wants Nvidia's AI Crown
Instead of trying to integrate the whole stack of, say, pytorch, Apple's primary approach has been converting models to work with Apple's stack.
https://github.com/apple/coremltools
Clearly no one is going to be doing training, or even fine-tuning, on Apple hardware at any scale (it competes at the low end, but at scale you will invariably be using Nvidia hardware), but once you have a decent model it's a robust way of using it on Apple devices.
- Stable Diffusion for M1 iPad
There is one guy who was able to run it on iOS. See this thread for more information. Basically, the idea is to convert torch models to CoreML. Only the CLIP tokenizer's implementation is currently missing. I guess this guy will keep his modifications private, but he is trying to optimize the model for lower RAM requirements.
- MacBook Pro 14” M1 Pro (worth buying for programming)
Afaik (correct me if I’m wrong) both PyTorch and TensorFlow only use the GPU when training, not the neural engine. I think the neural engine can be used for inference if the model is in the CoreML format (https://github.com/apple/coremltools)
- Is it possible to convert a yolov5 model to a CoreML/.mlmodel to work in an iOS app?
- ML model conversion
CoreML Tools
- Supreme Court, in a 6–2 ruling in Google v. Oracle, concludes that Google’s use of Java API was a fair use of that material
And Python.
- Apple’s New M1 Chip is a Machine Learning Beast
There's literally an Apple-provided tool, called [coremltools](https://github.com/apple/coremltools), to convert many common PyTorch and TensorFlow models to CoreML.
hummingbird
- Treebomination: Convert a scikit-learn decision tree into a Keras model
- [D] GPU-enabled scikit-learn
If you are interested in just predictions, you can try Hummingbird. It is part of the PyTorch ecosystem. It takes already-trained scikit-learn models and translates them into PyTorch models. From there you can run your model on any hardware supported by PyTorch, export it to TVM, ONNX, etc. Performance with hardware acceleration is quite good (orders of magnitude better than scikit-learn in some cases).
- Machine Learning with PyTorch and Scikit-Learn – The *New* Python ML Book
I think RAPIDS AI's cuML tried to go in this direction (essentially scikit-learn on the GPU): https://docs.rapids.ai/api/cuml/stable/api.html#logistic-reg.... For some reason it never really took off though.
Btw., going on a tangent, you might like Hummingbird (https://github.com/microsoft/hummingbird). It allows you to convert trained scikit-learn tree-based models to PyTorch. I watched the SciPy talk last year, and it's a super smart & elegant idea.
- Export and run models with ONNX
ONNX opens an avenue for direct inference using a number of languages and platforms. For example, a model could be run directly on Android to limit data sent to a third-party service. ONNX is an exciting development with a lot of promise. Microsoft has also released Hummingbird, which enables exporting traditional models (sklearn, decision trees, logistic regression, ...) to ONNX.
- Supreme Court, in a 6–2 ruling in Google v. Oracle, concludes that Google’s use of Java API was a fair use of that material
And Python.
- [D] Here are 3 ways to Speed Up Scikit-Learn - Any suggestions?
For inference, you can convert your models to other formats that support GPU acceleration. See Hummingbird https://github.com/microsoft/hummingbird
- [D] Microsoft library, Hummingbird, compiles trained ML models into tensor computation for faster inference.
The surprising thing is that Hummingbird can be faster than the GPU implementation of LightGBM (and XGBoost) if you use tensor compilers such as TVM. [The paper](https://www.usenix.org/conference/osdi20/presentation/nakandala) describes our findings. We have also open-sourced the [benchmark code](https://github.com/microsoft/hummingbird/tree/main/benchmarks) so you can try it yourself!
- I learned about Microsoft's Hummingbird library today. 1000x performance??
I took their sample code from GitHub and tweaked it to spit out times for each model's prediction, as well as to increase the number of rows to 5 million. I used Google's Colab and selected GPU as my hardware accelerator. This gives the option to run code on the GPU, though not all computations will happen on the GPU.
What are some alternatives?
RobustVideoMatting - Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!
onnx - Open standard for machine learning interoperability
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
swift - The Swift Programming Language
tensorflow_macos - TensorFlow for macOS 11.0+ accelerated using Apple's ML Compute framework.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
3d-model-convert-to-gltf - Convert 3d model (STL/IGES/STEP/OBJ/FBX) to gltf and compression
cuml - cuML - RAPIDS Machine Learning Library
MMdnn - MMdnn is a set of tools to help users inter-operate among different deep learning frameworks. E.g. model conversion and visualization. Convert models between Caffe, Keras, MXNet, Tensorflow, CNTK, PyTorch Onnx and CoreML.
docker - Docker - the open-source application container engine
password-manager-resources - A place for creators and users of password managers to collaborate on resources to make password management better.
chemprop - Message Passing Neural Networks for Molecule Property Prediction