| | deepsparse | tensorflow |
|---|---|---|
| Mentions | 21 | 223 |
| Stars | 2,878 | 182,575 |
| Growth | 1.5% | 0.5% |
| Activity | 9.5 | 10.0 |
| Latest commit | about 9 hours ago | about 2 hours ago |
| Language | Python | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
deepsparse
- Fast Llama 2 on CPUs with Sparse Fine-Tuning and DeepSparse
Interesting company. Yannic Kilcher interviewed Nir Shavit last year and they went into some depth: https://www.youtube.com/watch?v=0PAiQ1jTN5k DeepSparse is on GitHub: https://github.com/neuralmagic/deepsparse
- The future of quantization techniques in deep learning.
sparsity https://github.com/neuralmagic/deepsparse
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 1), what is the easiest way to speed up inference (assume only PyTorch, primarily GPU but also some CPU)? I have been using ONNX and TorchScript, but there is a bit of a learning curve and sometimes it can be tricky to get the model to actually work. Is there anything else worth trying? I am enthused by things like TorchDynamo (although I have not tested it extensively) due to its apparent ease of use. I also saw the post yesterday about Kernl using (OpenAI) Triton kernels to speed up transformer models, which also looks interesting. Are things like SageMaker Neo or NeuralMagic worth trying? My only reservation with some of these is that they still seem to be pretty model/architecture-specific. I am a little reluctant to put much time into these unless I know others have had some success first.
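For the ONNX route mentioned above, the basic export-and-run loop is short. Here is a minimal sketch with a placeholder model (names and shapes are illustrative only, not from the post):

```python
import torch
import onnxruntime as ort

# Placeholder model standing in for a real workload.
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU()).eval()
dummy = torch.randn(1, 128)

# Export to ONNX; dynamic_axes lets the batch dimension vary at runtime.
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"],
                  dynamic_axes={"input": {0: "batch"}})

# Run the exported graph with ONNX Runtime on CPU.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
out = sess.run(None, {"input": dummy.numpy()})[0]
print(out.shape)
```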
- [D] Most efficient open source language model?
You should look into DeepSparse; they are working on delivering GPU-level performance on consumer CPUs with some great results: https://github.com/neuralmagic/deepsparse. There is a great interview with the founder, Nir Shavit, here: https://piped.kavin.rocks/watch?v=0PAiQ1jTN5k
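If you want to try it, a minimal sketch of running an ONNX model through the engine, based on the repo's README (the model path is a placeholder; verify the API against the current docs):

```python
import numpy as np
from deepsparse import compile_model

# "model.onnx" is a placeholder; sparse-quantized models show the biggest wins.
engine = compile_model("model.onnx", batch_size=1)

# The engine takes a list of numpy arrays matching the model's inputs.
inputs = [np.random.rand(1, 3, 224, 224).astype(np.float32)]
outputs = engine.run(inputs)
```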
- [R] New sparsity research (oBERT) enabled 175X increase in CPU performance for MLPerf submission
Utilizing the oBERT research we published at Neural Magic, plus some further iteration, we've enabled a 175X increase in NLP performance while retaining 99% accuracy on the question-answering task in MLPerf. A combination of distillation, layer dropping, quantization, and unstructured pruning with oBERT enabled these large performance gains through the DeepSparse Engine. All of our contributions and research are open-sourced or free to use. Read through the oBERT paper on arXiv, try out the research in SparseML, and dive into the write-up to learn more about how we achieved these results and how to utilize them for your own use cases!
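For intuition, "unstructured pruning" means zeroing individual weights rather than whole neurons or layers. oBERT picks weights using second-order information; the sketch below uses a much simpler magnitude criterion, purely to illustrate the idea (not Neural Magic's method):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    # Keep only weights strictly above the threshold (ties are pruned too,
    # so achieved sparsity can slightly exceed the target).
    return weights * (np.abs(weights) > threshold)

layer = np.random.randn(768, 768)
pruned = magnitude_prune(layer, sparsity=0.9)
print(f"achieved sparsity: {(pruned == 0).mean():.2%}")
```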
- An open-source library for optimizing deep learning inference. (1) You select the target optimization, (2) nebullvm searches for the best optimization techniques for your model-hardware configuration, and then (3) serves an optimized model that runs much faster in inference.
Open-source projects leveraged by nebullvm include OpenVINO, TensorRT, Intel Neural Compressor, SparseML and DeepSparse, Apache TVM, ONNX Runtime, TFLite, and XLA. A huge thank-you to the open-source community for developing and maintaining these amazing projects.
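As a rough sketch of that three-step workflow (the import path and keyword names changed across nebullvm releases, so treat every name below as an assumption rather than a pinned API):

```python
import torch
from nebullvm import optimize_model  # import path is an assumption; it moved between releases

# Toy model and sample inputs standing in for a real workload.
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())
input_data = [torch.randn(1, 128) for _ in range(100)]

# (1) pick the optimization target, (2) let nebullvm search its backends
# (TensorRT, OpenVINO, ONNX Runtime, ...), (3) get back the fastest model found.
optimized_model = optimize_model(model, input_data=input_data,
                                 optimization_time="constrained")
```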
- [R] BERT-Large: Prune Once for DistilBERT Inference Performance
BERT-Large (345 million parameters) is now faster than the much smaller DistilBERT (66 million parameters), all while retaining BERT-Large's accuracy! We made this possible with Intel Labs by applying cutting-edge sparsification and quantization research from their Prune Once for All paper and utilizing it in the DeepSparse engine. It makes BERT-Large 12x smaller while delivering an 8x latency speedup on commodity CPUs. We open-sourced the research in SparseML; run through the overview here and give it a try!
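For a sense of how such recipes are applied in practice, here is a minimal sketch of SparseML's PyTorch recipe workflow (the recipe path and model are placeholders, and the exact signatures should be checked against the SparseML docs):

```python
import torch
from sparseml.pytorch.optim import ScheduledModifierManager

# Stand-in model and optimizer; a real run would use BERT-Large and its trainer.
model = torch.nn.Linear(768, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

# "recipe.yaml" is a placeholder for a SparseML pruning/quantization recipe.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")
optimizer = manager.modify(model, optimizer, steps_per_epoch=1000)

# ... run the usual training loop; the manager prunes/quantizes on schedule ...

manager.finalize(model)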
- [R] How well do sparse ImageNet models transfer? Prune once and deploy anywhere for inference performance speedups! (arxiv link in comments)
And benchmark/deploy with 8X better performance in DeepSparse!
- Sparseserver.ui – test the performance of Sparse Transformers
- [P] SparseServer.UI: A UI to test performance of Sparse Transformers
Hi _Arsenie, this runs the deepsparse.server command for multiple models. And by the way, we recently updated the READMEs for the DeepSparse Engine: https://github.com/neuralmagic/deepsparse
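For context, the server essentially exposes one inference Pipeline per configured model over HTTP; a single endpoint corresponds roughly to the Python below (the model path and texts are placeholders; check the README for the current names):

```python
from deepsparse import Pipeline

# Placeholder model identifier; a SparseZoo stub or local ONNX directory works here.
qa_pipeline = Pipeline.create(task="question-answering", model_path="model-dir/")

result = qa_pipeline(
    question="What does the server wrap?",
    context="deepsparse.server wraps one Pipeline per configured model.",
)
print(result)
```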
tensorflow
- Side Quest Devblog #1: These Fakes are getting Deep
```python
import tensorflow as tf
from tensorflow import keras

# (image_input, audio_input and their encodings are defined earlier in the devblog.)

# L2-normalize the encoding tensors
image_encoding = tf.math.l2_normalize(image_encoding, axis=1)
audio_encoding = tf.math.l2_normalize(audio_encoding, axis=1)

# Find the Euclidean distance between image_encoding and audio_encoding.
# Essentially trying to detect if the face is saying the audio.
# Will return nan without the 1e-12 offset due to
# https://github.com/tensorflow/tensorflow/issues/12071
d = tf.norm((image_encoding - audio_encoding) + 1e-12,
            ord='euclidean', axis=1, keepdims=True)

discriminator = keras.Model(inputs=[image_input, audio_input],
                            outputs=[d], name="discriminator")
```
- Google lays off its Python team
[3]: https://github.com/tensorflow/tensorflow/graphs/contributors
- TensorFlow-metal on Apple Mac is junk for training
- 🔥🚀 Top 10 Open-Source Must-Have Tools for Crafting Your Own Chatbot 🤖💬
To get up to speed with TensorFlow, check their quickstart. Support TensorFlow on GitHub ⭐
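For reference, a condensed example in the spirit of the official quickstart (MNIST sizes and hyperparameters as in the tutorial; trimmed to one epoch here):

```python
import tensorflow as tf

# Load and scale MNIST.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

# A small dense classifier, as in the official quickstart.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=1)
```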
- One .gitignore to rule them all
- 10 GitHub repositories to achieve Python mastery
- GitHub and Developer Ecosystem Control
Part of GitHub's major user-base pull revolves around its hosting of a considerable number of popular projects, including Angular, React, Kubernetes, CPython, Ruby, TensorFlow, and even the software that powers this site, Forem.
- Non-determinism in GPT-4 is caused by Sparse MoE
Right, but that's not an inherent GPU determinism issue. It's a software issue.
https://github.com/tensorflow/tensorflow/issues/3103#issueco... is correct that it's not necessary, it's a choice.
Your line of reasoning appears to be "GPUs are inherently non-deterministic, don't be quick to judge someone's code", which as far as I can tell is dead wrong.
Admittedly, there are some cases and instructions that may result in non-determinism where it is inherently necessary, but the author should think carefully before introducing non-determinism anywhere else. There are many scenarios where it is irrelevant, but ultimately the issue we are discussing here isn't the GPU's fault.
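As a concrete illustration of that point, determinism on GPUs is opt-in at the framework level. A minimal sketch using TensorFlow's own switches (available in recent TF 2.x releases):

```python
import tensorflow as tf

# Determinism is an explicit opt-in: seed every RNG and request deterministic
# kernels. Ops without a deterministic implementation will raise an error
# instead of silently producing run-to-run differences.
tf.keras.utils.set_random_seed(42)
tf.config.experimental.enable_op_determinism()
```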
- Can someone explain how keras code gets into the TensorFlow package?
and things like y = layers.ELU()(y) work as expected. I wanted to see a list of the available layers, so I went to the TensorFlow GitHub repository and into the keras directory. There's a warning in that directory that says:
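(The warning text itself is elided in the excerpt above.) As an aside, the layers can also be listed programmatically rather than by browsing the repo; a small illustrative sketch:

```python
import tensorflow as tf

# The Keras source lives in the separate keras package and is re-exported
# as tf.keras, which is why the layer code isn't in the tensorflow repo itself.
layer_names = [n for n in dir(tf.keras.layers) if not n.startswith("_")]
print(len(layer_names), layer_names[:5])
```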
- Is it even possible to design an ML model without using Python or MATLAB? Like using C++, C, or Java?
Exactly what language do you think TensorFlow is written in? :)
What are some alternatives?
NudeNet - Neural Nets for Nudity Detection and Censoring
PaddlePaddle - PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)
yolov5 - YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Prophet - Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.
openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
model-optimization - A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
LightGBM - A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
scikit-learn - scikit-learn: machine learning in Python
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
LightFM - A Python implementation of LightFM, a hybrid recommendation algorithm.