| | python-arrayclass | PyTorch |
|---|---|---|
| Mentions | 1 | 349 |
| Stars | 3 | 79,328 |
| Growth | - | 1.7% |
| Activity | 5.6 | 10.0 |
| Latest commit | about 1 year ago | 6 days ago |
| Language | Python | Python |
| License | MIT License | BSD 1-Clause License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
python-arrayclass
-
I created a package that lets you treat numpy arrays like dataclasses.
Get it here: https://github.com/Ivorforce/python-arrayclass
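To make the idea concrete, here is a hypothetical sketch of what "treating a numpy array like a dataclass" can mean — a dataclass whose fields are views into one flat buffer. This is an illustration of the concept only, not python-arrayclass's actual API; the `Particle` class and its methods are invented for this example.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical illustration (NOT python-arrayclass's real API):
# a record whose fields are views into one contiguous numpy buffer.
@dataclass
class Particle:
    position: np.ndarray  # shape (2,)
    velocity: np.ndarray  # shape (2,)

    def to_array(self) -> np.ndarray:
        # Flatten the fields into a single buffer.
        return np.concatenate([self.position, self.velocity])

    @classmethod
    def from_array(cls, buf: np.ndarray) -> "Particle":
        # Rebuild the fields as slices of the buffer (views, no copy).
        return cls(position=buf[0:2], velocity=buf[2:4])

p = Particle(np.array([0.0, 1.0]), np.array([2.0, 3.0]))
buf = p.to_array()
q = Particle.from_array(buf)
q.position += 10.0  # mutates the shared buffer through the view
assert buf[0] == 10.0
```

Because the fields are views, mutating `q.position` writes straight into `buf`, which is what makes array-backed records cheap to vectorize.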
PyTorch
-
Mathematics secret behind AI on Digit Recognition
Hi everyone! I’m devloker, and today I’m excited to share a project I’ve been working on: a digit recognition system implemented using pure math functions in Python. This project aims to help beginners grasp the mathematics behind AI and digit recognition without relying on high-level libraries like TensorFlow or PyTorch. You can find the complete code on my GitHub repository.
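As a flavor of the "pure math" approach described above — and this is our own sketch, not the author's code — a one-layer digit classifier's forward pass can be written directly from the formulas, with only numpy for the arithmetic:

```python
import numpy as np

# Illustrative sketch (not the project's actual code): softmax
# classification of a flattened 28x28 digit image, written from
# the math rather than a deep learning framework.
def softmax(z):
    # Numerically stable softmax: subtract the row max before exp.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, W, b):
    # Affine map of the pixels, then softmax over the 10 digit classes.
    return softmax(x @ W + b)

rng = np.random.default_rng(0)
x = rng.random((1, 784))                  # one flattened 28x28 image
W = rng.standard_normal((784, 10)) * 0.01  # weights
b = np.zeros(10)                           # biases
probs = forward(x, W, b)
assert probs.shape == (1, 10)
assert np.isclose(probs.sum(), 1.0)       # probabilities sum to 1
```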
-
Top 17 Fast-Growing GitHub Repos of 2024
PyTorch
-
AMD's MI300X Outperforms Nvidia's H100 for LLM Inference
> their own custom stack to interact with GPUs
lol completely made up.
are you conflating CUDA the platform with the C/C++ like language that people write into files that end with .cu? because while some people are indeed not writing .cu files, absolutely no one is skipping the rest of the "stack".
source: i work at one of these "mega corps". hell if you don't believe me go look at how many CUDA kernels pytorch has https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/n....
> Everybody thinks it’s CUDA that makes Nvidia the dominant player.
it 100% does
-
Awesome List
PyTorch - An open source machine learning framework. PyTorch Tutorials - Tutorials and documentation.
-
Understanding GPT: How To Implement a Simple GPT Model with PyTorch
In this guide, we provided a step-by-step explanation of how to implement a simple GPT (Generative Pre-trained Transformer) model using PyTorch: creating a custom dataset, building the GPT model, training it, and generating text. This hands-on implementation demonstrates the fundamental concepts behind the GPT architecture and serves as a foundation for more complex applications.
By following this guide, you now have a basic understanding of how to create, train, and use a simple GPT model. This equips you to experiment with different configurations, larger datasets, and additional techniques to improve the model's performance, and to apply transformer models to a range of NLP tasks. The methodology follows the transformer architecture introduced by Vaswani et al. (2017), which showed that self-attention processes sequences more effectively than earlier recurrent approaches; training uses the Adam optimizer of Kingma & Ba (2015).
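The ingredients the guide walks through can be condensed into a few lines of PyTorch. This is our own minimal sketch (class and dimension names are assumptions, not the article's code): token and position embeddings, a causally masked self-attention stack, and a projection back to vocabulary logits.

```python
import torch
import torch.nn as nn

# Minimal GPT-style sketch: embeddings + masked attention + LM head.
class TinyGPT(nn.Module):
    def __init__(self, vocab=100, d=64, heads=4, layers=2, ctx=32):
        super().__init__()
        self.tok = nn.Embedding(vocab, d)   # token embeddings
        self.pos = nn.Embedding(ctx, d)     # learned position embeddings
        block = nn.TransformerEncoderLayer(d, heads, 4 * d, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(d, vocab)     # project back to logits

    def forward(self, idx):
        T = idx.size(1)
        x = self.tok(idx) + self.pos(torch.arange(T, device=idx.device))
        # Causal mask: position t may only attend to positions <= t.
        mask = nn.Transformer.generate_square_subsequent_mask(T)
        return self.head(self.blocks(x, mask=mask))

model = TinyGPT()
logits = model(torch.randint(0, 100, (2, 16)))  # (batch, seq) token ids
assert logits.shape == (2, 16, 100)             # per-position vocab logits
```

Generation then amounts to repeatedly sampling from the softmax of the last position's logits and appending the sampled token.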
-
Building a Simple Chatbot using GPT model - part 2
PyTorch is a powerful and flexible deep learning framework that offers a rich set of features for building and training neural networks.
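That flexibility is visible even in a toy example. The sketch below (ours, not tied to the chatbot article's code) defines a small network and runs a single gradient step — the same define/forward/backward/step loop scales up to GPT-sized models.

```python
import torch
import torch.nn as nn

# A minimal train step: model, optimizer, loss, one gradient update.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.randn(8, 4)   # a batch of 8 inputs
y = torch.randn(8, 1)   # matching targets

loss = loss_fn(model(x), y)
opt.zero_grad()
loss.backward()          # autograd computes all parameter gradients
opt.step()               # SGD applies one update
loss_after = loss_fn(model(x), y)
```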
-
Clusters Are Cattle Until You Deploy Ingress
Oddly enough, sometimes, the best way to learn is by putting forth incorrect opinions or questions. Recently, while wrestling with AI project complexities, I pondered aloud whether all Docker images with AI models would inevitably be bulky due to PyTorch dependencies. To my surprise, this sparked many helpful responses, offering insights into optimizing image sizes. Being willing to be wrong opens up avenues for rapid learning.
-
Tinygrad 0.9.0
Tinygrad targets consumer hardware (to be precise, only the Radeon 7900XTX and nothing else[1]), while ROCm does not actually provide good support for such hardware. For example, the latest release of the hipBLASLt library (6.1.1) has deep integration with PyTorch[2], while working only on AMD Instinct hardware. And even for the professional hardware out there, the support period is ridiculous: the AMD Instinct MI100 (2020) is not supported. Only 4 years, and tens of thousands of dollars worth of hardware is going to the trash, yay!
And to be more precise, they still use some core libraries from ROCm stack[3], they just don't use all these fancy multi-gigabyte[4] hardware-limited rocBLAS/hipBLASlt/rocWMMA/rocRAND/etc. libraries.
[1] https://tinygrad.org/#tinybox
[2] https://github.com/pytorch/pytorch/issues/119081
[3] https://github.com/tinygrad/tinygrad/blob/v0.9.0/tinygrad/ru...
[4] https://repo.radeon.com/rocm/yum/6.1.1/main/
-
PyTorch 2.3: User-Defined Triton Kernels, Tensor Parallelism in Distributed
-
Image classifier with a convolutional neural network (CNN)
PyTorch (https://pytorch.org/)
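A minimal PyTorch CNN classifier of the kind such a tutorial builds might look like the following. This is a sketch with assumed shapes (3×32×32 color images, 10 classes), not the article's actual code:

```python
import torch
import torch.nn as nn

# Two conv/pool stages halve the spatial size twice, then a linear
# layer maps the flattened features to 10 class logits.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 16x16x16
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 32x8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),  # class logits
)
logits = model(torch.randn(4, 3, 32, 32))  # a batch of 4 images
assert logits.shape == (4, 10)
```

Training it is the usual loop: cross-entropy loss on the logits, backward, optimizer step.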
What are some alternatives?
NumPy - The fundamental package for scientific computing with Python.
Flux.jl - Relax! Flux is the ML library that doesn't make you tensor
typedload - Python library to load dynamically typed data into statically typed data structures
mediapipe - Cross-platform, customizable ML solutions for live and streaming media.
Apache Spark - A unified analytics engine for large-scale data processing
flax - Flax is a neural network library for JAX that is designed for flexibility.
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
Deep Java Library (DJL) - An Engine-Agnostic Deep Learning Framework in Java
tensorflow - An Open Source Machine Learning Framework for Everyone
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]