| | Pytorch | huggingface_hub |
|---|---|---|
| Mentions | 349 | 104 |
| Stars | 79,328 | 1,787 |
| Stars growth | 1.7% | 6.3% |
| Activity | 10.0 | 9.6 |
| Latest commit | 5 days ago | 4 days ago |
| Language | Python | Python |
| License | BSD 3-Clause License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Pytorch
-
The Mathematics Secret Behind AI: Digit Recognition
Hi everyone! I’m devloker, and today I’m excited to share a project I’ve been working on: a digit recognition system implemented using pure math functions in Python. This project aims to help beginners grasp the mathematics behind AI and digit recognition without relying on high-level libraries like TensorFlow or PyTorch. You can find the complete code on my GitHub repository.
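The post's code lives in the linked repository, but the core of such a "pure math" digit classifier is just a linear layer followed by a softmax. A minimal sketch in plain Python (the function and variable names here are illustrative, not taken from the repository):

```python
import math

def softmax(logits):
    # Convert raw scores into probabilities that sum to 1.
    # Subtracting the max first keeps exp() numerically stable.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def forward(pixels, weights, biases):
    # One linear layer: logits[d] = sum_i pixels[i] * weights[d][i] + biases[d]
    # For digit recognition there would be 10 rows in `weights`, one per digit.
    logits = [sum(p * w for p, w in zip(pixels, row)) + b
              for row, b in zip(weights, biases)]
    return softmax(logits)
```

Training then amounts to nudging `weights` and `biases` down the gradient of a loss such as cross-entropy, which is exactly the math that libraries like PyTorch automate.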
-
Top 17 Fast-Growing Github Repo of 2024
PyTorch
-
AMD's MI300X Outperforms Nvidia's H100 for LLM Inference
> their own custom stack to interact with GPUs
lol completely made up.
are you conflating CUDA the platform with the C/C++ like language that people write into files that end with .cu? because while some people are indeed not writing .cu files, absolutely no one is skipping the rest of the "stack".
source: i work at one of these "mega corps". hell if you don't believe me go look at how many CUDA kernels pytorch has https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/n....
> Everybody thinks it’s CUDA that makes Nvidia the dominant player.
it 100% does
-
Awesome List
PyTorch - An open source machine learning framework. PyTorch Tutorials - Tutorials and documentation.
-
Understanding GPT: How To Implement a Simple GPT Model with PyTorch
In this guide, we provided a comprehensive, step-by-step explanation of how to implement a simple GPT (Generative Pre-trained Transformer) model using PyTorch. We walked through the process of creating a custom dataset, building the GPT model, training it, and generating text.

This hands-on implementation demonstrates the fundamental concepts behind the GPT architecture and serves as a foundation for more complex applications. By following this guide, you now have a basic understanding of how to create, train, and utilize a simple GPT model. This knowledge equips you to experiment with different configurations, larger datasets, and additional techniques to enhance the model's performance and capabilities.

The principles and techniques covered here will help you apply transformer models to various NLP tasks, unlocking the potential of deep learning in natural language understanding and generation. The methodologies presented align with the advancements in transformer models introduced by Vaswani et al. (2017), emphasizing the power of self-attention mechanisms in processing sequences of data more effectively than traditional approaches. This understanding opens pathways to explore and innovate in the field of natural language processing using cutting-edge deep learning techniques (Kingma & Ba, 2015).
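The self-attention mechanism at the heart of that architecture computes softmax(QKᵀ/√d)·V. A minimal plain-Python sketch of that formula for a single head, to make the arithmetic concrete (PyTorch performs the same computation on batched tensors):

```python
import math

def attention(Q, K, V):
    """Scaled dot-product attention for one head.

    Q, K, V are lists of vectors (lists of floats) sharing dimension d.
    Returns one output vector per query.
    """
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        # Softmax over the scores (max-subtracted for stability).
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output is the attention-weighted sum of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, V))
                    for i in range(len(V[0]))])
    return out
```

A full GPT block adds learned projections for Q, K, V, multiple heads, a causal mask, and a feed-forward layer on top of this core operation.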
-
Building a Simple Chatbot using GPT model - part 2
PyTorch is a powerful and flexible deep learning framework that offers a rich set of features for building and training neural networks.
-
Clusters Are Cattle Until You Deploy Ingress
Oddly enough, sometimes, the best way to learn is by putting forth incorrect opinions or questions. Recently, while wrestling with AI project complexities, I pondered aloud whether all Docker images with AI models would inevitably be bulky due to PyTorch dependencies. To my surprise, this sparked many helpful responses, offering insights into optimizing image sizes. Being willing to be wrong opens up avenues for rapid learning.
-
Tinygrad 0.9.0
Tinygrad targets consumer hardware (to be precise, only the Radeon 7900 XTX and nothing else[1]), while ROCm does not actually provide good support for such hardware. For example, the last release of the hipBLASLt 6.1.1 library has deep integration with PyTorch[2], while working only on AMD Instinct hardware. And even for the professional hardware out there, the support period is ridiculous: the AMD Instinct MI100 (2020) is not supported. Only 4 years, and tens of thousands of dollars' worth of hardware is going to the trash, yay!
And to be more precise, they still use some core libraries from ROCm stack[3], they just don't use all these fancy multi-gigabyte[4] hardware-limited rocBLAS/hipBLASlt/rocWMMA/rocRAND/etc. libraries.
[1] https://tinygrad.org/#tinybox
[2] https://github.com/pytorch/pytorch/issues/119081
[3] https://github.com/tinygrad/tinygrad/blob/v0.9.0/tinygrad/ru...
[4] https://repo.radeon.com/rocm/yum/6.1.1/main/
- PyTorch 2.3: User-Defined Triton Kernels, Tensor Parallelism in Distributed
-
Image classifier with a convolutional neural network (CNN)
PyTorch (https://pytorch.org/)
huggingface_hub
-
OpenAI's employees were given two explanations for why Sam Altman was fired
Something to think about:
https://github.com/huggingface/huggingface_hub
- Thoughts on a "Text Generation CivitAI"
-
Civitai alternatives.
Yes! We have a well-documented Python library (https://github.com/huggingface/huggingface_hub) and public endpoints (https://huggingface.co/docs/hub/api#endpoints-table) you can use to retrieve information about the models and potentially build UIs with specific use cases in mind.
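As an illustration of that library, a sketch using huggingface_hub's `HfApi` to query the model index (this hits the live API, so results vary over time; the task filter shown is just an example):

```python
from huggingface_hub import HfApi

def top_models(task, n=5):
    """Return the ids of the n most-downloaded models for a pipeline task."""
    api = HfApi()
    models = api.list_models(filter=task, sort="downloads", direction=-1, limit=n)
    return [m.id for m in models]

if __name__ == "__main__":
    # Requires network access to huggingface.co.
    for model_id in top_models("text-generation"):
        print(model_id)
```

The same data is available over plain HTTP via the public endpoints (e.g. `GET https://huggingface.co/api/models`), which is what a custom UI would typically consume.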
-
Fox Fairy @ Diffusion Forest: Unreal Engine + Stable Diffusion
i think if you search for pixel art here there are some models worth checking out: https://huggingface.co/
- ASK HN: AI is really exciting but where do I start?
- I trained an AI to generate Éric Duhaime as a clown!
-
[Guide] DreamBooth Training with ShivamShrirao's Repo on Windows Locally
I received another error: `OSError: We couldn't connect to 'https://huggingface.co' to load this model, couldn't find it in the cached files and it looks like ./vae is not the path to a directory containing a file named diffusion_pytorch_model.bin`
-
Training a Deep Learning Language Model for Latin text Generation
I plan to release it on https://huggingface.co/, where all this cool AI stuff is available for free for everyone that wishes to try it.
-
Image Upscaling Models Compared (General, Photo and Faces)
For this I mainly used the chaiNNer application with models from here, but I also used the Google Colab AUTOMATIC1111 Stable Diffusion web UI (for example for Lanczos), as well as Hugging Face Spaces like this one, and the super-resolution collection on replicate.com.
-
2D Illustration Styles are scarce on Stable Diffusion so i created a dreambooth model inspired by Hollie Mengert's work
You will now need to create a Hugging Face account (https://huggingface.co/) if you haven't already. Once you have, go here and accept the terms: https://huggingface.co/runwayml/stable-diffusion-v1-5. When you have done both, click on your profile icon and go to Settings. Click Access Tokens, then Create Token; name it whatever you want and select "write". When you are finished with all this, you can run the next cell, which is the Hugging Face cell. It will ask for a token; copy and paste the one you just created.
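Programmatically, the token step in that walkthrough corresponds to huggingface_hub's `login` helper, which the notebook cell calls under the hood. A minimal sketch (the token value here is a placeholder, not a real credential):

```python
from huggingface_hub import login

if __name__ == "__main__":
    # Paste the "write" token created under Settings -> Access Tokens.
    # Placeholder value; a real token starts with "hf_".
    login(token="hf_xxxxxxxxxxxxxxxxxxxx")
```

Once logged in, downloads of gated models such as runwayml/stable-diffusion-v1-5 succeed, provided you have accepted the terms on the model page.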
What are some alternatives?
Flux.jl - Relax! Flux is the ML library that doesn't make you tensor
civitai - A repository of models, textual inversions, and more
mediapipe - Cross-platform, customizable ML solutions for live and streaming media.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
flax - Flax is a neural network library for JAX that is designed for flexibility.
mammography_metarepository - Meta-repository of screening mammography classifiers
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
KoboldAI-Client
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models