bevy_retro vs sentencepiece

| | bevy_retro | sentencepiece |
|---|---|---|
| Mentions | 5 | 19 |
| Stars | 294 | 9,520 |
| Growth | 0.3% | 2.1% |
| Activity | 4.0 | 8.1 |
| Latest commit | 7 months ago | 5 days ago |
| Language | Rust | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bevy_retro
- Many reasons to always read the LICENSE
See https://github.com/katharostech/bevy_retrograde/blob/master/LICENSE.md, section 6.2.
- Katharostech/Bevy_retrograde Licensing Issue
- Dall-E 2
There are programming projects[1] out there that use licenses to prevent people from using projects in ways the authors don't agree with. You could also argue that GPL does the same thing (prevents people from using/distributing the software in the way they would like).
Whether you consider it moral doesn't seem relevant; what matters is respecting the wishes of the authors of such programs.
[1] https://github.com/katharostech/bevy_retrograde/blob/master/...
- Yo guys, if you're an indie developer and you buy some 3D models, what do you have to do with the licenses? Do I have to include them in the game or something?
A "copyleft" licence such as GPL requires any software that uses it to also be licensed under GPL or equivalent. Then there's more esoteric licences such as Katharos.
- 5 stages of cargo
Speaking of, check the license on this bad boy: https://github.com/katharostech/bevy_retro
sentencepiece
- sentencepiece
- LLM.int8(): 8-Bit Matrix Multiplication for Transformers at Scale
You need to train the model on about 1 trillion tokens (https://platform.openai.com/tokenizer, https://github.com/google/sentencepiece) for it to develop reasoning capabilities, and it seems very unlikely that your data amounts to that much.
I'm highly skeptical that you have enough data to pretrain if you don't have enough data to fine-tune.
Fine-tuning + vector search + prompting with as much material as you can, on an LLM like PaLM 2 or GPT-4, is what I would do. Otherwise you can use Falcon 40B, of course.
Maybe I should charge for this, haha.
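As a rough illustration of the scale check the commenter describes, here is a minimal sketch that counts how many tokens a corpus actually contains; the model file `spm.model` and corpus `corpus.txt` are placeholder names, not anything from the original post:

```python
# Count tokens in a corpus with SentencePiece to sanity-check whether
# you are anywhere near pretraining scale (~1 trillion tokens).
import sentencepiece as spm

# "spm.model" and "corpus.txt" are placeholder names for this sketch.
sp = spm.SentencePieceProcessor(model_file="spm.model")

total = 0
with open("corpus.txt", encoding="utf-8") as f:
    for line in f:
        total += len(sp.encode(line))  # encode() returns a list of token ids

print(f"{total:,} tokens")  # compare against the ~1e12 cited for pretraining
```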
- [P] TokenMonster Ungreedy ~ 35% faster inference and 35% increased context-length for large language models (compared to tiktoken). Benchmarks included.
a) Comparison with the SentencePiece tokenizer with comparable settings (it can also ignore word boundaries and create phrase tokens).
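For context, the SentencePiece setting the comparison alludes to is, to my understanding, `split_by_whitespace=false`, which lets pieces span word boundaries. A minimal training sketch, with `corpus.txt` as a placeholder path:

```python
# Train a SentencePiece unigram model that may merge across whitespace,
# producing multi-word "phrase" tokens comparable to TokenMonster's.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",          # placeholder corpus path
    model_prefix="phrase_spm",   # writes phrase_spm.model / phrase_spm.vocab
    vocab_size=32000,
    model_type="unigram",
    split_by_whitespace=False,   # allow pieces to cross word boundaries
)
```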
- LLaMA tokenizer: is a JavaScript implementation available anywhere?
LLaMA uses the sentencepiece tokenizer: https://github.com/google/sentencepiece
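If it helps, this is roughly what using the `tokenizer.model` file shipped with the LLaMA weights looks like from the SentencePiece Python bindings; adjust the path to wherever your checkout keeps it:

```python
# Load LLaMA's SentencePiece model and round-trip a string.
import sentencepiece as spm

# "tokenizer.model" ships alongside the LLaMA weights; path is assumed here.
sp = spm.SentencePieceProcessor(model_file="tokenizer.model")

ids = sp.encode("Hello, world!")                    # integer token ids
pieces = sp.encode("Hello, world!", out_type=str)   # surface pieces like '▁Hello'
print(ids, pieces)
print(sp.decode(ids))                               # back to "Hello, world!"
```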
- [P] New tokenization method improves LLM performance & context-length by 25%+
Besides, are you familiar with SentencePiece? What you are doing looks very similar (generate a large vocab, prune the worst tokens until the target vocab size is reached); only the token-selection criterion is different. It's also purely data-driven, in the sense that there are no assumptions specific to any language (and it can optionally segment across whitespace, as you are doing).
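To make the generate-then-prune loop concrete, here is a toy sketch of that control flow. It is not SentencePiece's actual unigram algorithm (which prunes by likelihood loss under an EM-trained model); the crude count-times-length score and the `build_vocab` helper are invented purely for illustration:

```python
# Toy sketch: generate a large candidate vocabulary, then prune the
# worst-scoring tokens until the target size is reached.
from collections import Counter

def build_vocab(corpus, target_size, max_len=6):
    # 1) Generate: collect all substrings up to max_len as candidates.
    counts = Counter()
    for text in corpus:
        for i in range(len(text)):
            for j in range(i + 1, min(i + max_len, len(text)) + 1):
                counts[text[i:j]] += 1

    # Single characters are never pruned, so any string stays encodable.
    chars = {t for t in counts if len(t) == 1}
    vocab = dict(counts)

    def score(token):
        return vocab[token] * len(token)  # crude stand-in for likelihood loss

    # 2) Prune: repeatedly drop the worst-scoring multi-character token.
    while len(vocab) > target_size:
        worst = min((t for t in vocab if t not in chars), key=score, default=None)
        if worst is None:
            break
        del vocab[worst]
    return sorted(vocab, key=lambda t: -score(t))

print(build_vocab(["low lower lowest", "new newer newest"], target_size=40)[:10])
```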
- Code runs without definition of function (automatically calls a different function instead)
Hi, I'm studying the implementation of encode and decode functions for Google's SentencePiece tokenizer.
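For anyone else studying those two functions, this sketch shows the observable contract they implement: encode maps text to pieces or ids, and decode concatenates pieces and turns the '▁' (U+2581) meta-symbol back into spaces. The model path is a placeholder:

```python
# What SentencePiece's encode/decode do, as observed from the Python API.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spm.model")  # placeholder path

text = "Hello world"
pieces = sp.encode(text, out_type=str)  # e.g. ['▁Hello', '▁world']
ids = sp.encode(text)                   # the same segmentation as integer ids

# decode() is roughly the inverse: join pieces, map '▁' back to spaces.
manual = "".join(pieces).replace("\u2581", " ").lstrip()
print(pieces, ids)
print(manual, "|", sp.decode(ids))      # both should read "Hello world"
```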
- How to handle multiple languages in a sentence?
I think many LMs nowadays use Unicode tokenizers that are not tied to specific languages. E.g. sentencepiece is the most popular one: https://github.com/google/sentencepiece
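A small sketch of why that helps with mixed-language sentences: a single model segments raw Unicode text, with no per-language word lists. The `multilingual.model` file name assumes a model trained on multilingual data and is purely a placeholder:

```python
# One SentencePiece model handles mixed-script input because it operates
# on raw Unicode text, not on language-specific word lists.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="multilingual.model")  # placeholder

mixed = "I ordered ラーメン and café au lait"
print(sp.encode(mixed, out_type=str))
# Pieces fall out of the training data's statistics: scripts the model has
# seen often get multi-character pieces, rare ones fall back to single
# characters (or bytes, if trained with byte_fallback=True).
```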
- Large language models are having their Stable Diffusion moment
- LLaMA-7B in Pure C++ with full Apple Silicon support
If you are interested in implementing LLaMA yourself, or in learning, I noticed that the reference code by Facebook is some of the cleanest, easiest-to-read ML code I've seen in a while: https://github.com/facebookresearch/llama/blob/main/llama/mo... It's about 200 lines long. You probably do need a bit of background knowledge to understand what you are reading, but I was pleasantly surprised.
For comparison, the Stable Diffusion torch code in the diffusers and transformers Python libraries has lots of conditionals, experiments, etc. that are not actually used, which can make it hard to follow what is going on.
Last weekend I got the "main loop" of the transformer working in pure CPU Rust code, following the reference code. My crappy code is just very, very slow, as I focused on getting it to run, not on making it fast. The tokenizer uses some Google thing (https://github.com/google/sentencepiece), but luckily, for inference it seems you just need to be able to parse the tokenizer model file, not understand how it was created; I was able to strip the protobuf files out of that repository, add them to the Rust project, and read the tokens (see the sketch after this comment).
I am optimistic that someone will make a high-quality CPU implementation, or some CPU+GPU+SSD combination thingamajig, that will make it somewhat practical to run even the large LLM models without needing an A100 or two.
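To make the "just parse the model file" point concrete: the sentencepiece Python package ships the same protobuf schema the commenter stripped out, so you can read the vocabulary directly. A minimal sketch, assuming a LLaMA-style `tokenizer.model` on disk:

```python
# Read a SentencePiece model file's vocabulary straight from its protobuf,
# without any of the training machinery; this is all inference needs.
from sentencepiece import sentencepiece_model_pb2 as model_pb2

proto = model_pb2.ModelProto()
with open("tokenizer.model", "rb") as f:  # placeholder path
    proto.ParseFromString(f.read())

# trainer_spec.model_type is an enum: 1=UNIGRAM, 2=BPE, 3=WORD, 4=CHAR.
print(proto.trainer_spec.model_type)
for i, piece in enumerate(proto.pieces[:10]):
    print(i, repr(piece.piece), piece.score)  # surface form and unigram score
```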
- ChatGPT in an iOS Shortcut – World's Smartest HomeKit Voice Assistant
What are some alternatives?
rend3 - Easy to use, customizable, efficient 3D renderer library built on wgpu.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
bevy_tilemap - Tilemap with chunks for the Bevy game engine.
CTranslate2 - Fast inference engine for Transformer models
bevy - A refreshingly simple data-driven game engine built in Rust
llama - Inference code for Llama models
dalle-2-preview
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
gpt-3 - GPT-3: Language Models are Few-Shot Learners
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
community-events - Place where folks can contribute to 🤗 community events
OpenNMT-Tutorial - Neural Machine Translation (NMT) tutorial. Data preprocessing, model training, evaluation, and deployment.