sentencepiece vs dalle-mini
| | sentencepiece | dalle-mini |
|---|---|---|
| Mentions | 19 | 3,446 |
| Stars | 9,480 | 14,641 |
| Growth | 4.6% | - |
| Activity | 8.1 | 5.2 |
| Latest commit | 16 days ago | 6 months ago |
| Language | C++ | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sentencepiece
- LLM.int8(): 8-Bit Matrix Multiplication for Transformers at Scale
You need to train the model on ~1 trillion tokens (https://platform.openai.com/tokenizer, https://github.com/google/sentencepiece) anyway for it to gain reasoning capabilities, and it seems very unlikely that your data amounts to that much.
I'm highly skeptical that you have enough data to pretrain if you don't have enough data to fine-tune.
Fine-tuning + vector search + prompting with as much material as you can, on an LLM like PaLM 2 or GPT-4, is what I would do. Otherwise you can of course use Falcon 40B.
maybe I should charge for this ahah
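To put that ~1 trillion token threshold in perspective, here is a minimal sketch for counting how many tokens a corpus actually contains. It assumes the `sentencepiece` package and some pretrained tokenizer model on disk; the file names are placeholders:

```python
import sentencepiece as spm

# Load any pretrained SentencePiece model (placeholder path).
sp = spm.SentencePieceProcessor(model_file="tokenizer.model")

# Stream the corpus line by line and sum the token counts.
total_tokens = 0
with open("corpus.txt", encoding="utf-8") as f:
    for line in f:
        total_tokens += len(sp.encode(line))

print(f"corpus size: {total_tokens:,} tokens")
```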
- [P] TokenMonster Ungreedy ~ 35% faster inference and 35% increased context-length for large language models (compared to tiktoken). Benchmarks included.
a) A comparison with the SentencePiece tokenizer at comparable settings (it can also ignore word boundaries and create phrase tokens)
- LLaMA tokenizer: is a JavaScript implementation available anywhere?
LLaMA uses the sentencepiece tokenizer: https://github.com/google/sentencepiece
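For reference when writing a port, a minimal Python sketch of the encode/decode behavior a JavaScript implementation would need to reproduce, assuming access to LLaMA's `tokenizer.model` file and the `sentencepiece` package:

```python
import sentencepiece as spm

# Placeholder path to LLaMA's SentencePiece model file.
sp = spm.SentencePieceProcessor(model_file="tokenizer.model")

text = "Hello, world!"
ids = sp.encode(text)                    # text -> token ids
pieces = sp.encode(text, out_type=str)   # text -> subword pieces

print(ids)             # exact ids depend on the model file
print(pieces)          # pieces use '▁' to mark word boundaries
print(sp.decode(ids))  # round-trips back to "Hello, world!"
```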
- [P] New tokenization method improves LLM performance & context-length by 25%+
Besides, are you familiar with SentencePiece? What you are doing looks very similar (generate a large vocab, prune the worst tokens until the target vocab size is reached); only the token selection criterion is different. It's also purely data-driven, in the sense that there are no language-specific assumptions (and it can optionally segment across whitespace, as you are doing).
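For comparison, a minimal sketch of that prune-based unigram workflow using the `sentencepiece` Python package (file names are placeholders):

```python
import sentencepiece as spm

# Train a unigram model: SentencePiece seeds a large candidate vocabulary,
# then iteratively prunes low-utility pieces until vocab_size is reached.
spm.SentencePieceTrainer.train(
    input="corpus.txt",           # raw training text, one sentence per line
    model_prefix="unigram_demo",  # writes unigram_demo.model / .vocab
    vocab_size=8000,              # target size after pruning
    model_type="unigram",         # the prune-based algorithm
)

sp = spm.SentencePieceProcessor(model_file="unigram_demo.model")
print(sp.encode("data driven segmentation", out_type=str))
```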
- Code runs without definition of function (automatically calls a different function instead)
Hi, I'm studying the implementation of encode and decode functions for Google's SentencePiece tokenizer.
- How to handle multiple languages in a sentence?
I think many LMs nowadays use Unicode tokenizers that are not tied to specific languages. E.g., SentencePiece is the most popular one: https://github.com/google/sentencepiece
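To illustrate the point, a small sketch showing one model segmenting a mixed-language sentence; it assumes some multilingual SentencePiece model file (the path is a placeholder):

```python
import sentencepiece as spm

# Placeholder path to a multilingual SentencePiece model.
sp = spm.SentencePieceProcessor(model_file="multilingual.model")

# The model just sees a stream of Unicode characters, so the English and
# Japanese spans are segmented by the same vocabulary, no language tags needed.
mixed = "I bought 寿司 at the market"
print(sp.encode(mixed, out_type=str))
```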
- Large language models are having their Stable Diffusion moment
- LLaMA-7B in Pure C++ with full Apple Silicon support
If you are interested in implementing LLaMA yourself, or just learning, I noticed that the reference code by Facebook is some of the cleanest, easiest-to-read ML code I've seen in a while. https://github.com/facebookresearch/llama/blob/main/llama/mo... It's about 200 lines long. You probably do need a bit of background knowledge to understand what you are reading, but I was pleasantly surprised.
In comparison, the StableDiffusion torch code in the diffusers and transformers Python libraries has lots of unused conditionals, experiments, etc. that can make it hard to follow what is going on.
Last weekend I got the "main loop" of the transformer working in pure CPU Rust code, following the reference code. My crappy code is just very, very slow, as I focused on getting it to run, not making it fast. The tokenizer uses a Google library, https://github.com/google/sentencepiece, but luckily for inference it seems you just need to be able to parse the tokenizer model file, not understand how it was created; I was able to strip the protobuf files out of that repository, add them to my Rust project, and read the tokens.
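That "just parse the model file" approach can be sketched in a few lines using the ModelProto that ships with the `sentencepiece` Python package (the same .proto definition the commenter compiled for Rust); the model path is a placeholder and the `protobuf` package must be installed:

```python
from sentencepiece import sentencepiece_model_pb2 as sp_model

# Parse the serialized model file directly, without the tokenizer runtime.
proto = sp_model.ModelProto()
with open("tokenizer.model", "rb") as f:
    proto.ParseFromString(f.read())

# Each entry carries the piece text and its unigram log-probability score;
# that table is the core of what inference-time tokenization needs.
for i, piece in enumerate(proto.pieces[:10]):
    print(i, repr(piece.piece), piece.score)
```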
I am optimistic that someone will build a high-quality CPU implementation, or some CPU+GPU+SSD combination contraption, that will make it somewhat practical to run even the large LLM models without needing an A100 or two.
- ChatGPT in an iOS Shortcut – Worlds Smartest HomeKit Voice Assistant
dalle-mini
- Mini-Gemini: Mining the Potential of Multi-Modality Vision Language Models
Mini-Gemini is a bit of a confusing name.
Reminds me of how DALL·E Mini came out three years ago and eventually had to rename itself to Craiyon https://github.com/borisdayma/dalle-mini
- New Baby Kitten, what should I name her?
I wouldn't consider Craiyon to be high-tier equipment
- Annual meatball harvest in southern Italy. Mamma mia. 👌🤌
Made with: https://www.craiyon.com/
- Taylor Swift holding up a novel and reading it aloud in a beautiful library while standing behind a lectern #craiyon
- AI Eevee
AI Site
- AI art The Thing
- Never underestimate a droid: robots gather at AI for Good summit in Geneva
Try it: https://www.craiyon.com/. And it's not even that good a text-to-image generator.
- So simple, yet I can't get the prompt. Any idea?
For example, I used just your prompt on Craiyon; I guess you can also try the free SD demos on Replicate.
- I asked a generic AI to make a picture of Fernando Diniz; these were the results
- MADNESS AI ART
What are some alternatives?
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
CTranslate2 - Fast inference engine for Transformer models
dalle-2-preview
llama - Inference code for Llama models
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
stable-diffusion - A latent text-to-image diffusion model
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
dalle-flow - 🌊 A Human-in-the-Loop workflow for creating HD images from text
OpenNMT-Tutorial - Neural Machine Translation (NMT) tutorial. Data preprocessing, model training, evaluation, and deployment.
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement