yolov7 vs hivemind
| | yolov7 | hivemind |
|---|---|---|
| Mentions | 33 | 40 |
| Stars | 12,681 | 1,833 |
| Growth | - | 2.7% |
| Activity | 4.0 | 5.9 |
| Last commit | 9 days ago | 27 days ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
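For intuition, "recent commits have higher weight" can be modeled as an exponential time decay. The sketch below is purely illustrative; the site's actual formula is not published, and the function name and tau constant are made up here.

```python
# Purely illustrative recency-weighted activity score: each commit contributes
# exp(-age / tau), so recent commits dominate. The tracking site's real
# formula is not published; tau and everything else here are assumptions.
import math

def activity_score(commit_ages_days, tau=30.0):
    """Sum of exponentially decayed commit weights; newer commits weigh more."""
    return sum(math.exp(-age / tau) for age in commit_ages_days)

print(activity_score([1, 3, 10]))    # recent commits -> relatively high score
print(activity_score([200, 300]))    # stale project -> score near zero
```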
yolov7 mentions
- FLaNK Stack Weekly 16 October 2023
- Train a ML model able to identify animal species
If you want something off-the-shelf, try YOLOv7.
- A video based Latin dictionary: get what you see in Latin (beta) - What do you think?
The current dictionary is still in a beta state and has only been trained on 80 words (e.g. 'man', 'dog', 'car', 'keyboard', 'book', etc.; see list of words, see dataset). I used the object detection model Yolov7 (paper, all credits to them).
- [D] Extracting the class labels and bounding boxes for objects from a YOLOv7 model after converting to an ONNX model
(Please note, this is a re-post of my original question here; I think this subreddit might be more appropriate for it.) At work we use Unity, and we have a project that needs object detection and classification. We decided to use this YOLOv7 model (for non-technical reasons: it had to be this exact model, as the company has pre-trained weights for it). However, Unity only supports ONNX, so I exported the model as an ONNX model using the code provided in the repo.
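For anyone hitting the same wall: if the model was exported with the repo's export.py --end2end option (NMS baked into the graph), the ONNX output is typically one row per detection in the form [batch_id, x0, y0, x1, y1, class_id, score]. A minimal sketch of reading that back with onnxruntime follows; the file names, the 640x640 input size, and the naive resize are assumptions to adjust to your export settings (an export without --end2end instead returns raw grid predictions that you must decode and run NMS on yourself).

```python
# Minimal sketch for parsing detections from a YOLOv7 model exported to ONNX
# with NMS baked in (export.py --end2end). Assumes a 640x640 input and an
# output of shape (num_detections, 7): [batch_id, x0, y0, x1, y1, class_id, score].
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov7.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

img = cv2.imread("test.jpg")
h0, w0 = img.shape[:2]

# Naive resize for brevity; the repo itself uses an aspect-preserving letterbox.
blob = cv2.resize(img, (640, 640))
blob = cv2.cvtColor(blob, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
blob = blob.transpose(2, 0, 1)[None]  # HWC -> NCHW

detections = session.run(None, {input_name: blob})[0]

for batch_id, x0, y0, x1, y1, class_id, score in detections:
    if score < 0.25:
        continue
    # Scale boxes back from the 640x640 input to the original image size.
    box = np.array([x0, y0, x1, y1]) * [w0 / 640, h0 / 640, w0 / 640, h0 / 640]
    print(int(class_id), float(score), box.round(1))
```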
- Coding Question Help
- DL for the Web: Repository of Models
GitHub projects offering pretrained weights and train/run scripts.
- [OC] Football Player 3D Pose Estimation using YOLOv7 and Matplotlib
- Finding a good Tiny Yolo to train in Python
The only project I found is this one, which implements YOLOv7.
- Visualizing image augmentations from YOLOv7
I'm wondering if there's an efficient way to visualize the image augmentations produced by the YOLOv7 hyperparameters list here.
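One low-effort approach (a sketch, not the repo's actual pipeline): re-implement the simple augmentations (HSV jitter, horizontal flip) driven by the hyperparameter values and plot a grid of random samples with Matplotlib. The values below mirror the defaults in hyp.scratch.p5.yaml as I recall them; mosaic, mixup, and the repo's exact augment_hsv implementation are not reproduced here.

```python
# Rough visualization of YOLOv7-style HSV jitter and horizontal flip.
# HYP values are assumed defaults from hyp.scratch.p5.yaml; verify against
# the repo. This is a simplified re-implementation, not the repo's code.
import cv2
import numpy as np
import matplotlib.pyplot as plt

HYP = {"hsv_h": 0.015, "hsv_s": 0.7, "hsv_v": 0.4, "fliplr": 0.5}

def augment(img):
    # Random HSV gains, loosely following the repo's augment_hsv.
    r = np.random.uniform(-1, 1, 3) * [HYP["hsv_h"], HYP["hsv_s"], HYP["hsv_v"]] + 1
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 0] = (hsv[..., 0] * r[0]) % 180          # hue is 0-179 in OpenCV
    hsv[..., 1:] = np.clip(hsv[..., 1:] * r[1:], 0, 255)
    img = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    if np.random.rand() < HYP["fliplr"]:
        img = np.fliplr(img).copy()                   # copy() keeps memory contiguous
    return img

img = cv2.imread("sample.jpg")
fig, axes = plt.subplots(2, 4, figsize=(12, 6))
for ax in axes.flat:
    ax.imshow(cv2.cvtColor(augment(img.copy()), cv2.COLOR_BGR2RGB))
    ax.axis("off")
plt.tight_layout()
plt.show()
```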
- Train YOLOv8 ObjectDetection on Custom Dataset Tutorial
yolov7: https://github.com/WongKinYiu/yolov7#performance
hivemind mentions
- You can now train a 70B language model at home
https://github.com/learning-at-home/hivemind is also relevant
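For context, hivemind's README quickstart has this shape: each peer runs a DHT node, wraps its local PyTorch optimizer in hivemind.Optimizer, and the swarm averages once a global target batch size is reached. A condensed sketch follows; the model, data, and every hyperparameter are placeholders, not the 70B recipe from the article.

```python
# Condensed from the hivemind quickstart: peers share optimizer state over a
# DHT-based swarm. Model, data, and all hyperparameters here are placeholders.
import torch
import torch.nn.functional as F
import hivemind

model = torch.nn.Linear(784, 10)                      # toy stand-in model
local_opt = torch.optim.SGD(model.parameters(), lr=0.1)

dht = hivemind.DHT(start=True)                        # first peer; others pass initial_peers=[...]
print("join with initial_peers =", [str(a) for a in dht.get_visible_maddrs()])

opt = hivemind.Optimizer(
    dht=dht,
    run_id="demo_run",             # shared identifier for this training run
    batch_size_per_step=32,        # samples this peer contributes per step
    target_batch_size=10_000,      # global batch that triggers an averaging round
    optimizer=local_opt,
    use_local_updates=True,        # keep stepping locally between averaging rounds
    matchmaking_time=3.0,
    averaging_timeout=10.0,
    verbose=True,
)

for step in range(1000):           # stand-in for a real dataloader loop
    x = torch.randn(32, 784)
    y = torch.randint(0, 10, (32,))
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```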
- Would anyone be interested in contributing to some group projects?
I really hope you'll join me for the Petals support, at least! A single docker-compose.yml file is all we need for now (a sketch follows below). If we can find enough people willing to host some smaller models, perhaps we could expand into Hivemind and create our own custom foundation model one day?
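For the record, here is roughly what such a file could look like. The image name and run_server module follow the Petals README, but the model name, port, and GPU stanza are placeholders to verify against the Petals docs before hosting anything.

```yaml
# Hypothetical docker-compose.yml for one Petals server node. Image and
# "run_server" module follow the Petals README; model name, port, and the
# GPU reservation are placeholders - check the Petals docs before use.
services:
  petals-server:
    image: learningathome/petals:main
    command: python -m petals.cli.run_server petals-team/StableBeluga2 --port 31330
    ports:
      - "31330:31330"
    ipc: host
    volumes:
      - petals-cache:/cache
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  petals-cache:
```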
- Hive mind: Train deep learning models on thousands of volunteers across the world
- Could a model not be trained by a decentralized network? Like SETI@home, or kinda-sorta like Bitcoin. Petals accomplishes this somewhat, but if raw computing power is the only barrier to open source, I'd be happy to try organizing decentralized computing efforts.
Decentralized deep learning: https://github.com/learning-at-home/hivemind
- Orca (built on llama13b) looks like the new sheriff in town
https://github.com/learning-at-home/hivemind - the same people are behind it; it was made before Petals, I think.
- Do you think that AI research will slow down to a halt because of regulation?
Not if we rise to meet that challenge. Here are a few tools that facilitate AI research in the face of an advanced persistent threat: Hivemind, a distributed PyTorch framework.
- LLM@home
Yeah, there's Hivemind, and there's research on how to chunk out the training workload so it can be scaled up. Not sure why there's commentary that latency issues would limit this sort of enterprise; the architecture typically isn't designed for liveness. Other subfields of distributed training/inference include zero-knowledge machine learning. Besides all of that, there's also adversarial computation like SafetyNets and refereed delegation of computation.
- [D] Google "We Have No Moat, And Neither Does OpenAI": Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI
We already have the software for it. There are some projects, but the one I'm most familiar with is https://github.com/learning-at-home/hivemind for training, and its sister project https://petals.ml/ for running large models distributed.
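As a taste of the Petals side, client usage per its README looks roughly like the sketch below; the model name is an example, and the set of models actually hosted by the public swarm changes over time.

```python
# Sketch of Petals client usage, following the project's README: transformer
# blocks run on volunteer servers, while embeddings and the LM head run
# locally. The model name is an example; check the Petals docs for current models.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```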
- Run 100B+ language models at home, BitTorrent‑style
I'm not entirely sure how the approach they're using works [0], but I study federated learning, and one of the highly cited survey papers has several chapters (5 and 6 in particular) addressing potential attacks, failure modes, and bias [1].
0: https://github.com/learning-at-home/hivemind
1: https://arxiv.org/abs/1912.04977
- SETI Home Is in Hibernation
The Hivemind project is just that:
https://github.com/learning-at-home/hivemind
What are some alternatives?
yolov3 - YOLOv3 in PyTorch > ONNX > CoreML > TFLite
replika-research - Replika.ai Research Papers, Posters, Slides & Datasets
edgetpu - Coral issue tracker (and legacy Edge TPU API source)
alpa - Training and serving large-scale neural networks with auto parallelization.
edgetpu-yolo - Minimal-dependency Yolov5 export and inference demonstration for the Google Coral EdgeTPU
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
YOLOv4 - Port of YOLOv4 to C# + TensorFlow
Super-SloMo - PyTorch implementation of Super SloMo by Jiang et al.
darknet - Convolutional Neural Networks
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
HiveMind-core - Join the OVOS collective, utils for OpenVoiceOS mesh networking