tensorrtx
gpt-3
tensorrtx
-
A Three-pronged Approach to Bringing ML Models Into Production
In terms of the latter, this is quite common when employing non-standard SOTA models. If you want to use popular models, you can find a variety of TensorRT implementations on the web. For example, in a project where we needed to train an object-detection algorithm in PyTorch and deploy it on Triton, we used the PyTorch -> TensorRT -> Triton pipeline in several cases. The TensorRT implementation of the model was taken from here. You may also be interested in this repository, as it contains many current implementations maintained by their developers.
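For readers unfamiliar with that pipeline, here is a minimal sketch of the PyTorch -> TensorRT step via ONNX. Everything here is illustrative: the toy model, file names, and input shape are placeholders, and the tensorrtx implementations linked above build their networks with the C++ API rather than an ONNX parse.

```python
# Minimal sketch of the PyTorch -> TensorRT step (via ONNX).
# The model below is a toy stand-in for a real detector; paths
# and shapes are placeholders.
import torch
import tensorrt as trt

model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3)).eval()
dummy = torch.randn(1, 3, 640, 640)
torch.onnx.export(model, dummy, "detector.onnx", opset_version=13)

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("detector.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # FP16 if the GPU supports it
serialized = builder.build_serialized_network(network, config)

# Triton expects the engine in its model repository,
# e.g. model_repository/detector/1/model.plan
with open("model.plan", "wb") as f:
    f.write(serialized)
```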
-
DALL-E 2
I'll try them out. I have an RTX 2070, which apparently supports fp16, but it only has 8 GB of VRAM.
I used the instructions here to check: https://github.com/wang-xinyu/tensorrtx/blob/master/tutorial...
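For what it's worth, you can also query FP16 support directly instead of following the tutorial. A minimal sketch, assuming torch and the tensorrt Python bindings are installed (the RTX 2070 is a Turing card, compute capability 7.5, so fast FP16 should report true):

```python
# Quick FP16-support check (sketch).
import torch
import tensorrt as trt

major, minor = torch.cuda.get_device_capability()
print(torch.cuda.get_device_name(), f"compute capability {major}.{minor}")

builder = trt.Builder(trt.Logger(trt.Logger.WARNING))
print("fast FP16 support:", builder.platform_has_fast_fp16)
```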
-
Increasing USB cam FPS with YOLOv5 on a Jetson Xavier NX?
Optimize your model using TensorRT. There is a good implementation here: https://github.com/wang-xinyu/tensorrtx/tree/master/yolov5
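A quick way to verify the gain is to time the capture-plus-inference loop before and after optimizing. The sketch below is only a skeleton: infer() is a hypothetical stand-in for whatever detection call your TensorRT build exposes, and the camera index and resolution are assumptions.

```python
# Sketch: measure end-to-end FPS of a USB-camera loop on the Jetson.
import time
import cv2

cap = cv2.VideoCapture(0)  # assumed USB camera index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

frames, start = 0, time.time()
while frames < 300:
    ok, frame = cap.read()
    if not ok:
        break
    # infer(frame)  # <- hypothetical TensorRT-accelerated YOLOv5 call
    frames += 1

print(f"{frames / (time.time() - start):.1f} FPS")
cap.release()
```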
gpt-3
-
GPT4.5 or GPT5 being tested on LMSYS?
>I wasn't talking about "state of the art LLMs"; I am aware that commercial offerings are much better trained in Spanish. This was a thought experiment based on comments from people testing GPT-3.5 with Swahili.
A thought experiment based on other people's comments about another language. So... no. Fabricating failure modes from constructed ideas about how LLMs work is a frustratingly common occurrence in these kinds of discussions.
>Frustratingly, just a few months ago I read a paper describing how LLMs rely excessively on English-language representations of ideas, but now I can't find it.
Most LLMs are overwhelmingly trained on English. GPT-3's dataset was 92.6% English. https://github.com/openai/gpt-3/blob/master/dataset_statisti...
That the models are as proficient as they are in other languages is evidence enough that knowledge transfer is happening. https://arxiv.org/abs/2108.13349. If you trained a model on only the Catalan tokens GPT-3 saw, you'd get a GPT-2-level gibberish model at best.
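Those figures come from the linked CSV, which is easy to inspect yourself. A small sketch (rows are printed raw rather than assuming column names):

```python
# Sketch: peek at GPT-3's published language statistics.
import csv
import urllib.request

URL = ("https://raw.githubusercontent.com/openai/gpt-3/master/"
       "dataset_statistics/languages_by_word_count.csv")
with urllib.request.urlopen(URL) as resp:
    rows = list(csv.DictReader(resp.read().decode().splitlines()))

for row in rows[:5]:  # English should dominate the top of the list
    print(row)
```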
Anyway, here are some interesting papers:
How do languages influence each other? Studying cross-lingual data sharing during LLM fine-tuning - https://arxiv.org/pdf/2305.13286
Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer - https://arxiv.org/abs/2404.04042
Multilingual LLMs are Better Cross-lingual In-context Learners with Alignment - https://arxiv.org/abs/2305.05940
It's not that transfer is perfect, but the idea that there's none at all seemed so ridiculous to me (which is why I asked the first question). Models would be utterly useless in multilingual settings if that were really the case.
-
What are LLMs? An intro into AI, models, tokens, parameters, weights, quantization and more
Large models: everything above 10B parameters. This is where Llama 3, Llama 2, Mixtral 8x22B, GPT-3, and most likely GPT-4 sit.
-
Can ChatGPT improve my L2 grammar?
Are generative AI models useful for learning a language, and if so, for which languages? Over 90% of ChatGPT's training data was in English. The remaining 10% was split unevenly between 100+ languages. This suggests that the quality of the outputs will vary from language to language.
-
GPT4 Can’t Ace MIT
I doubt it was extensively trained on German data. Who knows about GPT-4, but GPT-3 is ~92% English and ~1.5% German, which means it saw more of "die, motherfucker, die" than of "die Mutter".
(https://github.com/openai/gpt-3/blob/master/dataset_statisti...)
- I need help.
-
[R] PaLM 2 Technical Report
Catalan was 0.018% of GPT-3's training corpus. https://github.com/openai/gpt-3/blob/master/dataset_statistics/languages_by_word_count.csv
- I'm seriously concerned that if I lost ChatGPT-4 I would be handicapped
- The responses I got from Bard after asking why 100 times… he was pissed 😂
-
BharatGPT: India's Own ChatGPT
>Certainly it is pleasing that they are not just doing Hindi, but some of these languages must be represented online by a very small corpus of text indeed. I wonder how effectively an LLM can be trained on such a small training set for any given language?
As long as it's not the main language, it doesn't really matter. Besides English (92.6%), the biggest language by representation (word count) is French, at 1.8%. Most of the languages GPT-3 knows sit at <0.2% representation.
https://github.com/openai/gpt-3/blob/master/dataset_statisti...
Competence in the main language will bleed into the rest.
- GPT-4 gets a B on Scott Aaronson's quantum computing final exam
What are some alternatives?
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
dalle-mini - DALL·E Mini - Generate images from a text prompt
tensorflow-yolov4-tflite - YOLOv4, YOLOv4-tiny, YOLOv3, YOLOv3-tiny implemented in TensorFlow 2.0 and Android. Converts YOLOv4 .weights to TensorFlow, TensorRT, and TFLite.
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
v-diffusion-pytorch - v objective diffusion inference code for PyTorch.
DALLE-mtf - OpenAI's DALL-E for large-scale training in mesh-tensorflow.
stylegan2-pytorch - Simplest working implementation of StyleGAN2, a state-of-the-art generative adversarial network, in PyTorch. Enabling everyone to experience disentanglement.
dalle-2-preview
SegmentationCpp - A C++ trainable semantic segmentation library based on LibTorch (PyTorch C++). Backbones: VGG, ResNet, ResNeXt. Architectures: FPN, U-Net, PAN, LinkNet, PSPNet, DeepLab-V3, DeepLab-V3+ so far.