Face Recognition
Pytorch
| | Face Recognition | PyTorch |
|---|---|---|
| Mentions | 34 | 348 |
| Stars | 52,167 | 79,328 |
| Growth | - | 1.7% |
| Activity | 0.0 | 10.0 |
| Latest commit | 16 days ago | 3 days ago |
| Language | Python | Python |
| License | MIT License | BSD 3-Clause License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Face Recognition
-
Security Image Recognition
Camera connected to a Pi? Something like this could run locally: https://github.com/ageitgey/face_recognition
-
Facial recognition software/API for face-blind teacher?
Have you tried this repo?
- GitHub - ageitgey/face_recognition: The world's simplest facial recognition API for Python and the command line
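The library's `compare_faces` call boils down to a Euclidean distance check between 128-dimensional face encodings against a tolerance (0.6 by default). A minimal pure-Python sketch of that comparison rule, with toy 3-d vectors standing in for real encodings:

```python
import math

def compare_faces(known_encodings, candidate, tolerance=0.6):
    """Return one match flag per known encoding: True when the Euclidean
    distance between the encodings is at or below the tolerance.
    (Sketch of the rule face_recognition.compare_faces applies to
    its 128-d dlib encodings; real inputs come from face_encodings().)"""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [distance(enc, candidate) <= tolerance for enc in known_encodings]

# Toy 3-d vectors stand in for the real 128-d face encodings.
print(compare_faces([[0.1, 0.2, 0.3], [0.9, 0.9, 0.9]], [0.1, 0.25, 0.3]))
```

Lowering the tolerance makes matching stricter; the library's docs suggest 0.6 as a reasonable default for most cases.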
-
Everything you need to know about the Machine Learning Pipeline
One of the most common challenges is the black-box problem, which arises when the pipeline becomes too complex to understand. This can make it difficult to identify issues with the system, or to understand why it isn't working as expected or making accurate predictions (a problem the saiwa company addresses for face recognition). Another challenge is the growing time organizations need to deploy a machine learning model, which makes real-time computing difficult. To overcome these challenges, it's important to have an efficient and rigorous ML pipeline. ML level 0 involves a manual process with its own set of challenges, while ML level 1 adds ML pipeline automation and additional components. A well-defined machine learning pipeline helps abstract the complex process into a series of steps, allowing each team to work independently on specific tasks such as data collection, data preparation, model training, model evaluation, and model deployment.
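The stages named above can be sketched as plain functions chained together, so each team owns and tests one step in isolation. Everything here (the toy data, the threshold "model") is illustrative, not a real pipeline:

```python
# Hypothetical pipeline sketch: collection -> preparation -> training -> evaluation.
def collect_data():
    # Stand-in for ingestion: (feature, label) toy records.
    return [(0.0, 0), (0.2, 0), (1.0, 1), (2.0, 1)]

def prepare(records):
    # Min-max scale the feature into [0, 1].
    xs = [x for x, _ in records]
    lo, hi = min(xs), max(xs)
    return [((x - lo) / (hi - lo), y) for x, y in records]

def train(records):
    # Toy "model": predict positive when the scaled feature exceeds the mean.
    threshold = sum(x for x, _ in records) / len(records)
    return lambda x: int(x > threshold)

def evaluate(model, records):
    hits = sum(model(x) == y for x, y in records)
    return hits / len(records)

data = prepare(collect_data())
model = train(data)
print(evaluate(model, data))  # accuracy on the toy data
```

Because each stage takes plain inputs and returns plain outputs, any stage can be swapped (e.g. a real trainer behind `train`) without touching the others, which is exactly the abstraction benefit the paragraph describes.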
-
Reverse image search / facial recognition
The second link is an easy-to-implement Python library if you want to build it yourself: https://github.com/ageitgey/face_recognition
-
Made an easy-to-use face recognition library
It is similar to https://github.com/ageitgey/face_recognition, except that ageitgey's CLI only compares the first face found in the first image to the first face found in the second.
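Handling the multi-face case just means comparing every encoding found in image A against every encoding found in image B instead of only the first of each. A hedged pure-Python sketch (toy 2-d vectors stand in for the library's 128-d encodings):

```python
import math

def pairwise_matches(encodings_a, encodings_b, tolerance=0.6):
    """Return (i, j) index pairs where face i in image A matches face j in
    image B, i.e. their encodings are within the distance tolerance.
    Hypothetical helper; real encodings would come from a face library."""
    matches = []
    for i, a in enumerate(encodings_a):
        for j, b in enumerate(encodings_b):
            dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
            if dist <= tolerance:
                matches.append((i, j))
    return matches

# Second face in image A matches the first face in image B.
print(pairwise_matches([[0.0, 0.0], [5.0, 5.0]], [[5.1, 5.0], [9.0, 9.0]]))
```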
-
Salisbury council meeting minutes addressing conspiracy theorist councillors
You'd have a lot more luck with something like dlib or an open-source implementation such as: https://github.com/ageitgey/face_recognition
- Face comparison in Stable Diffusion
-
Understanding different Algorithms for Facial Recognition
To learn more about the face_recognition module: https://github.com/ageitgey/face_recognition
Pytorch
-
Top 17 Fast-Growing GitHub Repos of 2024
PyTorch
-
AMD's MI300X Outperforms Nvidia's H100 for LLM Inference
> their own custom stack to interact with GPUs
lol completely made up.
are you conflating CUDA the platform with the C/C++ like language that people write into files that end with .cu? because while some people are indeed not writing .cu files, absolutely no one is skipping the rest of the "stack".
source: i work at one of these "mega corps". hell if you don't believe me go look at how many CUDA kernels pytorch has https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/n....
> Everybody thinks it’s CUDA that makes Nvidia the dominant player.
it 100% does
-
Awesome List
PyTorch - An open source machine learning framework. PyTorch Tutorials - Tutorials and documentation.
-
Understanding GPT: How To Implement a Simple GPT Model with PyTorch
In this guide, we provided a comprehensive, step-by-step explanation of how to implement a simple GPT (Generative Pre-trained Transformer) model using PyTorch. We walked through the process of creating a custom dataset, building the GPT model, training it, and generating text. This hands-on implementation demonstrates the fundamental concepts behind the GPT architecture and serves as a foundation for more complex applications.
By following this guide, you now have a basic understanding of how to create, train, and utilize a simple GPT model. This knowledge equips you to experiment with different configurations, larger datasets, and additional techniques to enhance the model's performance and capabilities. The principles and techniques covered here will help you apply transformer models to various NLP tasks, unlocking the potential of deep learning in natural language understanding and generation.
The methodologies presented align with the advancements in transformer models introduced by Vaswani et al. (2017), emphasizing the power of self-attention mechanisms in processing sequences of data more effectively than traditional approaches (Vaswani et al., 2017). This understanding opens pathways to explore and innovate in the field of natural language processing using cutting-edge deep learning techniques (Kingma & Ba, 2015).
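The self-attention mechanism at the heart of the GPT architecture is scaled dot-product attention, softmax(QKᵀ/√d_k)·V from Vaswani et al. (2017). A pure-Python sketch with plain lists (a real implementation would use batched PyTorch tensors, a causal mask, and multiple heads):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V,
    written with nested lists so every arithmetic step is visible."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)  # attention weights sum to 1
        # Output = weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]                       # one query
K = [[1.0, 0.0], [0.0, 1.0]]           # two keys
V = [[1.0, 2.0], [3.0, 4.0]]           # two values
print(attention(Q, K, V))
```

The query is more similar to the first key, so the output leans toward the first value row; each output row is a convex combination of the value rows.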
-
Building a Simple Chatbot using GPT model - part 2
PyTorch is a powerful and flexible deep learning framework that offers a rich set of features for building and training neural networks.
-
Clusters Are Cattle Until You Deploy Ingress
Oddly enough, sometimes, the best way to learn is by putting forth incorrect opinions or questions. Recently, while wrestling with AI project complexities, I pondered aloud whether all Docker images with AI models would inevitably be bulky due to PyTorch dependencies. To my surprise, this sparked many helpful responses, offering insights into optimizing image sizes. Being willing to be wrong opens up avenues for rapid learning.
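One commonly suggested way to shrink such images (a sketch, not taken from the discussion above): install the CPU-only PyTorch wheel from the official index rather than the default CUDA build, which pulls in multiple gigabytes of GPU libraries the container may never use.

```dockerfile
# Hypothetical slim image for a CPU-only inference service.
FROM python:3.11-slim
# The CPU-only wheel index avoids bundling CUDA libraries.
RUN pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cpu
COPY app.py .
CMD ["python", "app.py"]
```

Whether this is viable depends on the workload; models that need GPU inference obviously still pay the CUDA-stack cost.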
-
Tinygrad 0.9.0
Tinygrad targets consumer hardware (to be precise, only the Radeon 7900XTX and nothing else[1]), while ROCm does not actually provide good support for such hardware. For example, the latest release of the hipBLASLt 6.1.1 library has deep integration with PyTorch[2], while working only on AMD Instinct hardware. And even for the professional hardware out there, the support period is ridiculous: the AMD Instinct MI100 (2020) is not supported. Only 4 years, and tens of thousands of dollars worth of hardware is going in the trash, yay!
And to be more precise, they still use some core libraries from ROCm stack[3], they just don't use all these fancy multi-gigabyte[4] hardware-limited rocBLAS/hipBLASlt/rocWMMA/rocRAND/etc. libraries.
[1] https://tinygrad.org/#tinybox
[2] https://github.com/pytorch/pytorch/issues/119081
[3] https://github.com/tinygrad/tinygrad/blob/v0.9.0/tinygrad/ru...
[4] https://repo.radeon.com/rocm/yum/6.1.1/main/
- PyTorch 2.3: User-Defined Triton Kernels, Tensor Parallelism in Distributed
-
Image classifier with a convolutional neural network (CNN)
PyTorch (https://pytorch.org/)
-
AI enthusiasm #9 - A multilingual chatbot📣🈸
torch is a package for tensors and dynamic neural networks in Python (GitHub)
What are some alternatives?
insightface - State-of-the-art 2D and 3D Face Analysis Project
Flux.jl - Relax! Flux is the ML library that doesn't make you tensor
CompreFace - Leading free and open-source face recognition system
mediapipe - Cross-platform, customizable ML solutions for live and streaming media.
Milvus - A cloud-native vector database, storage for next generation AI applications
Apache Spark - A unified analytics engine for large-scale data processing
OpenCV - Open Source Computer Vision Library
flax - Flax is a neural network library for JAX that is designed for flexibility.
tesseract-ocr - Tesseract Open Source OCR Engine (main repository)
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
Kornia - Geometric Computer Vision Library for Spatial AI
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more