tinygrad
tensorflow_macos (DISCONTINUED)
| | tinygrad | tensorflow_macos |
|---|---|---|
| Mentions | 17 | 33 |
| Stars | 23,232 | 2,887 |
| Growth | 4.8% | - |
| Activity | 9.9 | 3.4 |
| Latest commit | 3 days ago | almost 3 years ago |
| Language | Python | Shell |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tinygrad
-
AMD Unveils Ryzen 8000G Series Processors: Zen 4 APUs for Desktop with Ryzen AI
Not sure if I completely understand what "Ryzen AI" does, but Tinygrad for example has some limited support for RDNA3[0]. It isn't quite there yet in terms of performance though, as you can read in the comments of that file.
There's also a small tutorial by AMD on how to use the WMMA intrinsic[1] with AMD's hipcc[2] compiler. Documentation is kinda sparse, but the instruction set is not huge. The RDNA3 ISA guide[3] might also be helpful (and only a fraction of the pages are relevant).
0. https://github.com/tinygrad/tinygrad/blob/master/extra/gemm/...
1. https://gpuopen.com/learn/wmma_on_rdna3/
2. https://github.com/ROCm/HIPCC
3. https://www.amd.com/content/dam/amd/en/documents/radeon-tech...
-
Beyond Backpropagation - Higher Order, Forward and Reverse-mode Automatic Differentiation for Tensorken
This post describes how I added automatic differentiation to Tensorken. Tensorken is my attempt to build a fully featured yet easy-to-understand and hackable implementation of a deep learning library in Rust. It takes inspiration from the likes of PyTorch, Tinygrad, and JAX.
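For context, the core idea the post builds on (reverse-mode automatic differentiation, the same mechanism tinygrad and PyTorch rely on) fits in a few lines. This is a generic, scalar-only sketch for illustration, not Tensorken's actual implementation (Tensorken is written in Rust and works on tensors):

```python
# Minimal scalar reverse-mode autodiff, in the spirit of micrograd.
# Illustrative only; real libraries operate on tensors and support many more ops.

class Value:
    def __init__(self, data, parents=(), backward_fn=lambda: None):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = backward_fn

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward_fn():
            self.grad += out.grad           # d(a+b)/da = 1
            other.grad += out.grad          # d(a+b)/db = 1
        out._backward = backward_fn
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward_fn():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = backward_fn
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x, y = Value(2.0), Value(3.0)
z = x * y + x          # z = x*y + x
z.backward()
print(x.grad, y.grad)  # 4.0 2.0  (dz/dx = y + 1, dz/dy = x)
```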
-
[D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
what do you think about tinygrad? I think it's a good example of a growing, well-written, and (partially) well-documented library with many close-to-reference implementations
-
💻 7 Open-Source DevTools That Save Time You Didn't Know to Exist ⌛🚀
Website: https://tinygrad.org/
-
Decomposing Language Models into Understandable Components
Try to get something like tinygrad[1] running locally; that way you can tweak things a bit, run it again, and see how it performs. While doing this you'll pick up most of the concepts and get a feeling for how things work. Also, take a look at projects like llama.cpp[2]; you don't have to fully understand what's going on there, though.
You may need some intermediate knowledge of linear algebra and this thing called "data science" nowadays, which is pretty much knowing how to mangle data and visualize it.
Try creating a small model on your own; it doesn't have to be super fancy, just make sure it does something you want it to do. And then ... you can probably go on from there on your own.
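If you want something concrete to start from, here is a rough sketch of a tiny training loop using tinygrad's Tensor API; exact imports and method names vary between tinygrad versions, so treat it as an outline rather than copy-paste code:

```python
from tinygrad import Tensor

# Toy data: learn y = 2*x with a single weight.
xs = Tensor([[1.0], [2.0], [3.0], [4.0]])
ys = Tensor([[2.0], [4.0], [6.0], [8.0]])

w = Tensor.randn(1, 1, requires_grad=True)

lr = 0.01
for step in range(100):
    pred = xs.matmul(w)                # forward pass
    loss = ((pred - ys) ** 2).mean()   # mean squared error
    loss.backward()                    # reverse-mode autodiff
    w = (w - lr * w.grad).detach()     # manual SGD step
    w.requires_grad = True

print(w.numpy())  # should be close to 2.0
```

The same loop can be swapped over to tinygrad's built-in optimizers once you are comfortable with what the manual update is doing.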
-
Stable Diffusion in pure C/C++
-
There is no hard takeoff
lol, you should see me bash my own code. I'm even meaner.
https://github.com/tinygrad/tinygrad/blob/master/examples/hl...
We have a bunch of bounties on it, and we're getting 94%+ now! It was mostly not me who wrote this, see the history. We still have to switch to float16 and add Winograd convs. We have a branch with multi-GPU too.
The goal is to beat an A100 in speed on a tinybox.
-
MatX: Faster Chips for LLMs
AMD drivers are a higher priority, but he also made tinygrad: https://github.com/tinygrad/tinygrad
-
[Project] Whisper Implementation in Rust using burn
I temporarily switched from Rust to Python for machine learning, but quickly became fed up with Python's annoying versioning issues and runtime errors. I looked for a better path to machine learning and discovered burn, a deep learning framework for Rust. As my first burn project I decided to port OpenAI's Whisper transcription model. The project can be found at Gadersd/whisper-burn: A Rust implementation of OpenAI's Whisper model using the burn framework (github.com). I based it on the excellently concise tinygrad implementation that can be found here. The tinygrad version begrudgingly uses Torch's stft, which I ported into a pure Rust short-time Fourier transform along with the mel-scale frequency conversion matrix function, because I am curious and just a bit masochistic.
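For readers unfamiliar with what was ported: a short-time Fourier transform just slides a window over the signal and takes an FFT of each frame. Here is a naive NumPy sketch of the idea; it is illustrative only, not the whisper-burn Rust code, and it ignores the padding/centering details a real Whisper front end needs:

```python
import numpy as np

def stft(signal, n_fft=400, hop_length=160):
    """Naive short-time Fourier transform: window each frame, FFT, stack."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop_length):
        frame = signal[start:start + n_fft] * window
        # rfft keeps only the non-negative frequencies (n_fft // 2 + 1 bins)
        frames.append(np.fft.rfft(frame))
    return np.stack(frames, axis=1)   # shape: (n_fft // 2 + 1, n_frames)

# Log-mel features (as used by Whisper) are then |stft|**2 projected
# through a mel filterbank matrix and log-compressed.
spec = np.abs(stft(np.random.randn(16000))) ** 2
print(spec.shape)  # e.g. (201, 98) for one second of 16 kHz audio
```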
-
Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
Would it be better to use https://github.com/tinygrad/tinygrad as an intermediary framework?
tensorflow_macos
-
Updated Apple Silicon Guide for M2 Pro and M2 Max Chips
https://github.com/apple/tensorflow_macos is no longer needed
-
The hunt for the M1’s neural engine
TensorFlow has a CoreML-enabled version which runs on the ANE.
-
Main PyTorch maintainer confirms that work is being done to support Apple Silicon GPU acceleration for the popular machine learning framework.
Apple did some work to optimize TensorFlow for the M1, which can be found at https://github.com/apple/tensorflow_macos. It's alpha, but works fine; I tried it.
-
Apple M1 support for TensorFlow 2.5 pluggable device API
I was able to install this fairly easily (much more so than the crap they dumped out here - https://github.com/apple/tensorflow_macos. Just take a look at the 200 GitHub issues that were ignored for the most part...)
I also noticed that in my project I got a decent speedup immediately when executing my model, but I have not run any benchmarks.
But where do you go to file bugs, ask questions, etc.? I am not a big Mac developer, so is there something I don't know?
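As a quick sanity check after installing the pluggable-device packages (commonly the tensorflow-macos wheel plus the tensorflow-metal plugin; package names may differ across releases), you can confirm that TensorFlow actually sees the M1 GPU:

```python
import tensorflow as tf

# If the Metal plugin is registered, the GPU shows up as a PhysicalDevice.
print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))

# Ops placed on the GPU device will then run through the plugin.
with tf.device("/GPU:0"):
    x = tf.random.normal((1024, 1024))
    y = tf.matmul(x, x)
print(y.device)
```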
-
[D] M1 MacBooks versus Google Colab for deep learning
https://github.com/apple/tensorflow_macos you mean this one?
So the question I have now is which one is faster/better suited for my purposes. The M1 got [hyped](https://machinelearning.apple.com/updates/ml-compute-training-on-mac) a lot, so I thought the M1 would savage my desktop (and actually the hype biased my purchase decision), but it's only slightly better (like 1.2-1.5x faster in my cifar10 benchmark), and I wonder if it's worth the effective 1-2 GB of RAM left on macOS vs the ~14 GB on my Linux machine. Further, there is Colab, and I can't really tell which one will win the race, since Colab limits resources by demand but also allows distributed fit on cloud TPUs, which would introduce some extra coding effort. Then again I have to say: so does ML on Apple Silicon, which comes with [a handful of limitations](https://github.com/apple/tensorflow_macos#additional-information), a [peculiar MiniConda setup](https://github.com/apple/tensorflow_macos/issues/153), and a [lot of issues](https://github.com/apple/tensorflow_macos/issues) (also severe ones, like training errors etc., problems which I would not even recognize) which are actually not really being worked on.
-
Terminal killing my command to initialize conda for Miniforge3
Following this site's instructions, I tried multiple ways of downloading and installing Miniforge, including Homebrew, a CI pipeline, and downloading the shell files from here.
-
Cerebras’ New Monster AI Chip Adds 1.4T Transistors
You might be interested in this for your M1 MBA: https://github.com/apple/tensorflow_macos
-
Hey Rustaceans! Got an easy question? Ask here (16/2021)!
I did find this, though: https://github.com/apple/tensorflow_macos. Take it with a grain of salt; I don't own an M1 (although I am saving up for a new laptop and am thinking about it :) ).
What are some alternatives?
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
miniforge - A conda-forge distribution.
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
llama.cpp - LLM inference in C/C++
Pointnet_Pointnet2_pytorch - PointNet and PointNet++ implemented by pytorch (pure python) and on ModelNet, ShapeNet and S3DIS.
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
flamegraph - Easy flamegraphs for Rust projects and everything else, without Perl or pipes <3
Python-docker - Docker Official Image packaging for Python
coremltools - Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
RAFT
llama - Inference code for Llama models