TensorFlow-object-detection-tutorial vs cupy
| | TensorFlow-object-detection-tutorial | cupy |
|---|---|---|
| Mentions | 1 | 25 |
| Stars | 154 | 10,451 |
| Growth | 0.0% | 1.5% |
| Activity | 0.0 | 9.9 |
| Last commit | about 5 years ago | 8 days ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
TensorFlow-object-detection-tutorial
- Python CUDA - Multiprocessing/Pipes/Queues
I am trying to accomplish something similar to this with YOLOv5. https://github.com/pythonlessons/TensorFlow-object-detection-tutorial
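The question above is about splitting frame capture and inference across processes connected by pipes/queues. A minimal sketch of that pattern (an illustration, not the tutorial's actual code), with a hypothetical `detect()` stand-in for a real YOLOv5 model and synthetic frames in place of a camera:

```python
import multiprocessing as mp
import numpy as np

def detect(frame):
    # Hypothetical stand-in for a real detector (e.g. a YOLOv5 model);
    # here it just returns the frame's mean intensity.
    return float(frame.mean())

def producer(frames):
    for _ in range(50):  # stand-in for reading frames from cv2.VideoCapture
        frames.put(np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8))
    frames.put(None)     # sentinel: no more frames

def consumer(frames, results):
    while True:
        frame = frames.get()
        if frame is None:
            results.put(None)
            break
        results.put(detect(frame))

if __name__ == "__main__":
    frames, results = mp.Queue(maxsize=8), mp.Queue()
    mp.Process(target=producer, args=(frames,)).start()
    mp.Process(target=consumer, args=(frames, results)).start()
    while (detection := results.get()) is not None:
        print(detection)
```

The bounded queue keeps the capture process from running arbitrarily far ahead of the (slower) detection process.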
cupy
- Nvidia adds native Python support to CUDA
There is a plethora of packages, including DSLs for compute and MLIR.
https://developer.nvidia.com/how-to-cuda-python
https://cupy.dev/
- CuPy: NumPy and SciPy for GPU
- NumPy 2.0.0
No.
You may want to check out cupy
https://cupy.dev/
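For context, CuPy mirrors the NumPy API closely enough that the usual workflow is just moving arrays to and from the device. A minimal sketch, assuming CuPy is installed and a CUDA GPU is available:

```python
import numpy as np
import cupy as cp

x_cpu = np.random.rand(1_000_000).astype(np.float32)
x_gpu = cp.asarray(x_cpu)        # copy the host array to the GPU

y_gpu = cp.sqrt(x_gpu).sum()     # same calls as NumPy, executed on the device

print(cp.asnumpy(y_gpu))         # copy the result back to the host
```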
- Mojo: Ownership and lifetime checks deep dive with Chris Lattner [video]
I think I would agree with you. In my opinion, that already exists and is decently mature. CuPy [0] for Python and CUDA.jl [1] for Julia are both excellent ways to interface with GPUs that don't require you to get into the nitty-gritty of CUDA. Both do their best to keep you at the array-level abstraction until you actually need to start writing kernels yourself, and even then it's pretty simple. They took a complete GPU novice like me and let me write pretty performant kernels without ever having to touch raw CUDA.
[0] https://cupy.dev/
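The "kernels without raw CUDA" point maps to features like CuPy's ElementwiseKernel, where only the per-element expression is written in C and CuPy generates the rest. A small sketch, assuming a working CuPy/CUDA setup:

```python
import cupy as cp

# Only the per-element expression is written in C; CuPy generates the
# surrounding CUDA kernel, launch configuration, and broadcasting logic.
squared_diff = cp.ElementwiseKernel(
    'float32 x, float32 y',   # inputs
    'float32 z',              # output
    'z = (x - y) * (x - y)',  # per-element body
    'squared_diff')

a = cp.arange(10, dtype=cp.float32)
b = cp.ones(10, dtype=cp.float32)
print(squared_diff(a, b))     # runs on the GPU
```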
- Keras 3.0
I did not expect anything interesting, but this is actually cool.
> A full implementation of the NumPy API. Not something "NumPy-like" — just literally the NumPy API, with the same functions and the same arguments.
I suppose it's like https://cupy.dev/
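For a rough sense of the comparison, Keras 3 exposes its NumPy-style functions under keras.ops; a small sketch, assuming the standalone keras package (3.x) with any of its backends installed:

```python
from keras import ops

x = ops.reshape(ops.arange(12, dtype="float32"), (3, 4))

# NumPy-style calls, dispatched to whichever backend Keras is configured with
y = ops.matmul(x, ops.transpose(x))
print(ops.mean(y))
```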
- Progress on No-GIL CPython
- Fedora 40 Eyes Dropping Gnome X11 Session Support
What was the difference in runtime performance, and did you try CuPy?
https://github.com/cupy/cupy :
> CuPy is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. CuPy acts as a drop-in replacement to run existing NumPy/SciPy code on NVIDIA CUDA or AMD ROCm platforms.
Projects using CuPy:
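One concrete illustration of that drop-in compatibility is cupy.get_array_module, which lets the same function run on NumPy arrays on the CPU or CuPy arrays on the GPU; a minimal sketch, assuming both libraries are installed:

```python
import numpy as np
import cupy as cp

def softmax(x):
    # Dispatch to whichever library the input array belongs to
    xp = cp.get_array_module(x)
    e = xp.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

print(softmax(np.random.rand(2, 3)))   # NumPy array: runs on the CPU
print(softmax(cp.random.rand(2, 3)))   # CuPy array: same code, runs on the GPU
```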
- How does one optimize their functions?
It's more effort, though. You will likely have to format your data in specific ways for the GPU to process it efficiently. I've done this kind of thing with PyTorch tensors, but there are also math-specific libraries like CuPy. If you only have millions of elements, NumPy should be fine.
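A short sketch of that "format your data for the GPU" point with PyTorch (an illustration, not the commenter's code; it falls back to the CPU if no GPU is available):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pack everything into one contiguous tensor so the GPU can process it in bulk
points = torch.rand(1_000_000, 3, device=device)

# One vectorized call instead of a Python loop over individual points
norms = points.pow(2).sum(dim=1).sqrt()
print(norms.mean().item())
```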
- Speed Up Your Physics Simulations (250x Faster Than NumPy) Using PyTorch. Episode 1: The Boltzmann Distribution
I'd also recommend checking out CuPy, which aims to fully re-implement the NumPy API for CUDA GPUs while taking advantage of Nvidia's specialized libraries like cuBLAS, cuRAND, cuSOLVER, etc. The tradeoff is that it only works with Nvidia GPUs.
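To make the cuRAND point concrete, a small sketch in the spirit of the linked article, assuming CuPy and an NVIDIA GPU: Maxwell–Boltzmann velocity components are drawn with CuPy's cuRAND-backed random module and reduced on the device (the constants are illustrative):

```python
import cupy as cp

n = 1_000_000
k_B, T, m = 1.380649e-23, 300.0, 6.63e-26   # Boltzmann constant, temperature (K), argon-like mass (kg)

# Maxwell-Boltzmann velocity components are Gaussian; cp.random is cuRAND-backed
sigma = (k_B * T / m) ** 0.5
v = cp.random.normal(0.0, sigma, size=(n, 3))

speeds = cp.linalg.norm(v, axis=1)           # reduction runs on the GPU
mean_kinetic_energy = 0.5 * m * (speeds ** 2).mean()
print(mean_kinetic_energy, 1.5 * k_B * T)    # equipartition: these should agree
```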
What are some alternatives?
ImageAI - A python library built to empower developers to build applications and systems with self-contained Computer Vision capabilities
cupynumeric - NumPy and SciPy on Multi-Node Multi-GPU systems
pytorch2keras - PyTorch to Keras model converter
Numba - NumPy aware dynamic Python compiler using LLVM
AnimeGANv2 - [Open Source]. The improved version of AnimeGAN. Landscape photos/videos to anime
scikit-cuda - Python interface to GPU-powered libraries