yolov7
NUWA
| | yolov7 | NUWA |
|---|---|---|
| Mentions | 33 | 23 |
| Stars | 12,715 | 2,794 |
| Growth | - | 0.4% |
| Activity | 3.2 | 3.3 |
| Last commit | 12 days ago | 11 months ago |
| Language | Jupyter Notebook | - |
| License | GNU General Public License v3.0 only | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
yolov7
- FLaNK Stack Weekly 16 October 2023
- Train an ML model able to identify animal species
If you want something off-the-shelf, try YOLOv7.
- A video-based Latin dictionary: get what you see in Latin (beta) - What do you think?
The current dictionary is still in a beta state and has only been trained on 80 words (e.g. 'man', 'dog', 'car', 'keyboard', 'book'; see list of words, see dataset). I used the object detection model YOLOv7 (paper; all credits to them).
- [D] Extracting the class labels and bounding boxes for objects from a YOLOv7 model after converting to an ONNX model
(Please note, this is a re-post of my original question here; I think this subreddit might be more appropriate for it.) At work, we use Unity, and we have a project that needs object detection and classification. We decided to use this YOLOv7 model (for non-technical reasons: it had to be this exact model, as the company has pre-trained weights for it). However, Unity only supports ONNX, so I exported the model as an ONNX model using the code provided in the repo:
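Once the exported model produces raw detections, the labels and boxes can be recovered in post-processing. A minimal sketch, assuming the exported model emits one row per detection in the layout `[batch_id, x1, y1, x2, y2, class_id, score]` (this layout, the `CLASS_NAMES` list, and `parse_detections` are illustrative assumptions; check the output shape of your own exported model):

```python
import numpy as np

# Hypothetical label list; replace with the class names your model was trained on.
CLASS_NAMES = ["person", "car", "dog"]

def parse_detections(raw, conf_threshold=0.25):
    """Convert raw detection rows of
    [batch_id, x1, y1, x2, y2, class_id, score]
    into labelled boxes. The 7-column layout is an assumption
    about the exported model's output."""
    results = []
    for row in np.asarray(raw, dtype=float):
        batch_id, x1, y1, x2, y2, cls_id, score = row
        if score < conf_threshold:
            continue  # drop low-confidence detections
        results.append({
            "label": CLASS_NAMES[int(cls_id)],
            "box": (x1, y1, x2, y2),
            "score": float(score),
        })
    return results

# Synthetic output: one confident "car"; the low-score row is filtered out.
raw = np.array([
    [0, 10.0, 20.0, 110.0, 220.0, 1, 0.91],
    [0, 5.0, 5.0, 50.0, 50.0, 0, 0.10],
])
dets = parse_detections(raw)
```

In practice the `raw` array would come from an ONNX Runtime `InferenceSession.run(...)` call on the exported model rather than being constructed by hand.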
- Coding Question Help
- DL for the Web: Repository of Models
GitHub projects offering pretrained weights and train/run scripts. Example
- [OC] Football Player 3D Pose Estimation using YOLOv7 and Matplotlib
- Finding a good Tiny YOLO to train in Python
The only project I found is this one, which implements YOLOv7
- Visualizing image augmentations from YOLOv7
I'm wondering if there's an efficient way to visualize the image augmentations from the YOLOv7 hyperparameters list here
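One way to inspect such augmentations is to apply them repeatedly to the same image and compare the results. The sketch below mimics YOLOv7's HSV colour augmentation, which is driven by the `hsv_h`, `hsv_s`, and `hsv_v` hyperparameters; note this is a simplified float-array version for illustration (the repo itself implements it with OpenCV lookup tables), so the function name and exact arithmetic here are assumptions:

```python
import numpy as np

def augment_hsv_sketch(img_hsv, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, rng=None):
    """Apply random HSV gains, in the spirit of YOLOv7's hsv_* hyperparameters.
    Simplified sketch: operates on a float HSV image with channels in [0, 1]."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Random per-channel gains centred on 1, scaled by the hyperparameters.
    gains = rng.uniform(-1, 1, 3) * [hsv_h, hsv_s, hsv_v] + 1
    out = img_hsv * gains
    out[..., 0] %= 1.0                       # hue wraps around
    out[..., 1:] = out[..., 1:].clip(0, 1)   # saturation/value stay in range
    return out

# Draw several augmented variants of one flat-grey image to see the spread.
img = np.full((4, 4, 3), 0.5)
samples = [augment_hsv_sketch(img, rng=np.random.default_rng(i)) for i in range(5)]
```

Plotting each entry of `samples` side by side (e.g. with Matplotlib's `imshow`) gives a quick visual sense of how wide the augmentation distribution is for a given set of hyperparameter values.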
- Train YOLOv8 ObjectDetection on Custom Dataset Tutorial
yolov7: https://github.com/WongKinYiu/yolov7#performance
NUWA
- How long until we can create full-length movies with AI?
Github: https://github.com/microsoft/NUWA/tree/main/assets/nuwa_infinity/animation
- [R] NUWA-Infinity, the first paper working on infinite visual synthesis!
Code for https://arxiv.org/abs/2207.09814 found: https://github.com/microsoft/NUWA
- [D] Most Popular AI Research July 2022 pt. 2 - Ranked Based On GitHub Stars
- I'm building a timeline for generative image ML models. What's missing?
Microsoft NUWA: https://github.com/microsoft/NUWA
- NUWA Infinity
- With so many new Text to Image "AI" models emerging lately, is it not crazy to speculate about Text to Video?
Microsoft NUWA
- Have any researchers in the field discussed anything about the prospect of 'text-to-video' - something that's a bit like DALL-E 2, but with a video as the finished output?
NÜWA from Microsoft.
- Art Student here. So about Dalle 2, am I in trouble or should I continue on with my studies? Moreover, what do you think the future holds in store for specific artists (ie comics as opposed to freelance writers as opposed to animators etc) in light of this announcement?
- Imagine this: complete "fake AI people" are coming, and you didn't even see this coming!
P.S. Lucidrains remade it, and he says he's adding an audio transformer to it tomorrow! He needs feedback and someone to train it; I don't think there are enough resources helping this project's training. You can reach him through: https://github.com/microsoft/NUWA
What are some alternatives?
yolov3 - YOLOv3 in PyTorch > ONNX > CoreML > TFLite
CogVideo - Text-to-video generation. The repo for ICLR2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
edgetpu - Coral issue tracker (and legacy Edge TPU API source)
DALLE2-video - Direct application of DALLE-2 to video synthesis, using factored space-time Unet and Transformers
edgetpu-yolo - Minimal-dependency Yolov5 export and inference demonstration for the Google Coral EdgeTPU
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
YOLOv4 - Port of YOLOv4 to C# + TensorFlow
min-dalle - min(DALL·E) is a fast, minimal port of DALL·E Mini to PyTorch
darknet - Convolutional Neural Networks
XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
Cream - This is a collection of our NAS and Vision Transformer work. [Moved to: https://github.com/microsoft/AutoML]