edgetpu-yolo
transformers
| | edgetpu-yolo | transformers |
|---|---|---|
| Mentions | 2 | 175 |
| Stars | 81 | 124,557 |
| Growth | - | 2.7% |
| Activity | 2.6 | 10.0 |
| Last commit | 8 days ago | 6 days ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
edgetpu-yolo
- YOLOv6: Redefine state-of-the-art for object detection
-
A microcontroller board with a camera, mic, and Coral Edge TPU
I'm on the fence. It's a very nice device if you can get your models working on it - basically unmatched at that price/power point. Drivers have been OK for me. I have an M.2 card connected to a Jetson devkit (makes for a nice embedded test bench) and it runs fine, no worse than the NCS to set up anyway. There were a couple of PCI settings to tweak, but I documented the setup here [0]. For common use cases it's a decent option, I think. For custom models you really need to know what you're doing.
The main issue I've had is that the compiler behaviour differs between versions (and it's very difficult to find older releases), so where previously you could run a big model and delegate the unsupported ops to the CPU, now it sometimes won't compile at all. There was also a case where we trained a model in AutoML - using free credits, though the real cost would have been over $100 - but the edgetpu-compiled model lost a lot of performance. The developers have been very helpful when I've contacted them, and generally you can get through to real devs (not generic support) who will look at your model for you. Mostly I think you need to take care when training models for these devices, but quantisation-aware training is not trivial to use in TensorFlow (a sketch of the workflow follows), and there are only a few off-the-shelf models supported in the various toolkits. Model Maker looks promising, but it's also finicky in my experience [1].
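To illustrate the workflow the commenter describes, here is a minimal sketch of quantisation-aware training with the tensorflow_model_optimization toolkit, followed by the full-integer TFLite export the Edge TPU requires. The toy model and random representative data are placeholders, and the final edgetpu_compiler step runs outside Python:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy Keras model standing in for a real detector backbone.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(96, 96, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Wrap the model with fake-quantisation ops for quantisation-aware training.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# qat_model.fit(train_ds, epochs=5)  # train as usual on your dataset

# Full-integer conversion, required by the Edge TPU.
def representative_data():
    for _ in range(100):
        yield [tf.random.uniform((1, 96, 96, 3))]

converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
open("model_int8.tflite", "wb").write(converter.convert())
# Then, on the command line: edgetpu_compiler model_int8.tflite
```

It's at the conversion and compilation stages that the version-dependent behaviour the comment mentions tends to surface.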
I'm not super worried about hardware availability. They're suffering from the chip shortage like everyone else, so it's not surprising that lead times are long. I was able to buy my device in late 2020 without any trouble.
[0] https://github.com/jveitchmichaelis/edgetpu-yolo/blob/main/h...
transformers
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options:
The Flax ecosystem
https://github.com/google/flax?tab=readme-ov-file
or dm-haiku
https://github.com/google-deepmind/dm-haiku
These are among the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
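For readers comparing these options, here is a minimal, hypothetical Flax example (a tiny MLP, not an LLM) just to show the define/init/apply pattern the library uses:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    """Tiny two-layer network to illustrate Flax's module API."""
    hidden: int

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(self.hidden)(x)
        x = nn.relu(x)
        return nn.Dense(1)(x)

model = MLP(hidden=16)
# Parameters live outside the module and are created explicitly.
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 4)))
y = model.apply(params, jnp.ones((1, 4)))
print(y.shape)  # (1, 1)
```

The same explicit-parameter style carries over to the larger JAX LLM codebases mentioned above.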
-
Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
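For context, prompt lookup decoding is exposed through generate's prompt_lookup_num_tokens argument in recent transformers releases; a minimal sketch (the model choice is arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "def fibonacci(n):\n    if n < 2:\n        return n\n    return fibonacci("
inputs = tok(prompt, return_tensors="pt")

# Draft tokens are copied from n-gram matches found in the prompt itself,
# then verified by the model, so no separate draft model is needed.
out = model.generate(**inputs, prompt_lookup_num_tokens=10, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```

Swapping the prompt-derived n-gram lookup for a pretrained n-gram model would mean replacing the candidate-generation step while keeping the same verification machinery.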
-
AI enthusiasm #6 - Finetune any LLM you want 💡
Most of this tutorial is based on the Hugging Face course on Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
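As a minimal illustration of the kind of fine-tuning loop such tutorials cover, here is a hedged sketch using the transformers Trainer; the model and dataset are arbitrary small placeholders, not the tutorial's own choices:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # small placeholder model for illustration
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny slice of a public dataset, just to keep the sketch runnable.
ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=128)

ds = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=ds,
    # mlm=False gives standard causal language modelling labels.
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```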
-
Schedule-Free Learning – A New Way to Train
* Superconvergence + the LR range finder + Fast AI's Ranger21 optimizer was the go-to combination for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was best, whilst 1e-5 was actually better. However, the one-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
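For reference, a minimal sketch of the one-cycle schedule mentioned above, using PyTorch's built-in OneCycleLR (the model and data are toy placeholders):

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 2)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
# LR warms up to max_lr, then anneals back down over total_steps.
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1e-3, total_steps=1000)

x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
for step in range(1000):
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()  # advance the one-cycle schedule every step
```

The schedule-free approach in the linked post aims to remove exactly this kind of hand-tuned schedule.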
-
Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers, pytorch-gemma and collabing with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral MoE
-
Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
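If you just want to poke at the architecture, a hedged sketch: transformers exposes the Mistral model classes directly, so you can instantiate a scaled-down, randomly initialised config on CPU (the sizes below are arbitrary, far smaller than the real Mistral-7B):

```python
from transformers import MistralConfig, MistralForCausalLM

# Arbitrary tiny hyperparameters so the model fits on a laptop.
cfg = MistralConfig(
    vocab_size=32000,
    hidden_size=256,
    intermediate_size=512,
    num_hidden_layers=4,
    num_attention_heads=8,
    num_key_value_heads=4,   # grouped-query attention
    sliding_window=1024,     # Mistral's sliding-window attention
)
model = MistralForCausalLM(cfg)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```

This lets you experiment with the architectural choices (GQA, sliding-window attention) without the pretrained weights or the training-scale hardware.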
-
Failing to reproduce the same evaluation metric scores during inference.
I am aware that using mixed precision reduces numerical stability and that some inconsistency is expected, but I didn't expect it to be this large. I have attached a graph of the evaluation metrics. If someone can give me some insight into this issue, that would be great.
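Not a full answer, but a common first step (assuming PyTorch, which the mention of mixed precision suggests) is to pin every seed and request deterministic kernels before comparing runs:

```python
import os
import random
import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    """Pin all RNGs and request deterministic kernels."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Required by some CUDA ops when deterministic algorithms are enforced.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    torch.use_deterministic_algorithms(True, warn_only=True)
    torch.backends.cudnn.benchmark = False

seed_everything()
# Also make sure the model is in eval mode and that autocast/dtype settings
# match exactly between the runs being compared.
```

If the gap persists with everything pinned, the discrepancy is more likely a genuine precision effect than run-to-run nondeterminism.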
What are some alternatives?
yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
frigate - NVR with realtime local object detection for IP cameras
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
yolov7_d2 - 🔥🔥🔥🔥 (Earlier YOLOv7 not official one) YOLO with Transformers and Instance Segmentation, with TensorRT acceleration! 🔥🔥🔥
llama - Inference code for Llama models
YOLOv6 - YOLOv6: a single-stage object detection framework dedicated to industrial applications.
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
PixelLib - Visit PixelLib's official documentation https://pixellib.readthedocs.io/en/latest/
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
CATNet - 🛰️ Learning to Aggregate Multi-Scale Context for Instance Segmentation in Remote Sensing Images (TNNLS 2023)
huggingface_hub - The official Python client for the Hugging Face Hub.