sentencepiece
langchain
| | sentencepiece | langchain |
|---|---|---|
| Mentions | 19 | 152 |
| Stars | 9,480 | 56,526 |
| Growth | 4.6% | - |
| Activity | 8.1 | 10.0 |
| Latest commit | 16 days ago | 9 months ago |
| Language | C++ | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sentencepiece
-
LLM.int8(): 8-Bit Matrix Multiplication for Transformers at Scale
You need to train the model on roughly 1 trillion tokens (https://platform.openai.com/tokenizer, https://github.com/google/sentencepiece) anyway for it to develop reasoning capabilities, and it seems very unlikely that your data is that large.
I'm highly skeptical that you have enough data to pretrain if you don't have enough data to fine-tune.
Fine-tuning + vector search + prompting with as much relevant context as you can fit, on an LLM like PaLM 2 or GPT-4, is what I would do; otherwise you can use Falcon 40B, of course.
maybe I should charge for this ahah
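For the vector search + prompting part of that advice, a minimal sketch using LangChain's classic API might look like the following; `docs.txt`, the chunk sizes, and the question are hypothetical placeholders, and it assumes an OpenAI API key and faiss-cpu are available.
```python
# Minimal sketch: retrieval-augmented Q&A over your own documents.
# Assumes OPENAI_API_KEY is set and faiss-cpu is installed; "docs.txt"
# and the question below are hypothetical placeholders.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

docs = TextLoader("docs.txt").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)
store = FAISS.from_documents(chunks, OpenAIEmbeddings())  # embed + index

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-4"),
    retriever=store.as_retriever(search_kwargs={"k": 4}),  # top-4 chunks
)
print(qa.run("What does our internal documentation say about rate limits?"))
```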
-
[P] TokenMonster Ungreedy ~ 35% faster inference and 35% increased context-length for large language models (compared to tiktoken). Benchmarks included.
a) Comparison with SentencePiece tokenizer with comparable settings (It can also ignore word-boundaries and create phrase tokens)
-
LLaMA tokenizer: is a JavaScript implementation available anywhere?
LLaMA uses the sentencepiece tokenizer: https://github.com/google/sentencepiece
-
[P] New tokenization method improves LLM performance & context-length by 25%+
Besides, are you familiar with SentencePiece? What you are doing looks very similar (generate a large vocabulary, then prune the worst tokens until the target vocab size is reached); only the token-selection criterion is different. It's also purely data-driven, in the sense that there are no assumptions specific to any language (and it can optionally segment across whitespace, as you are doing).
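For reference, SentencePiece's unigram trainer implements exactly that generate-then-prune scheme; a minimal training sketch (the corpus file and parameter values are hypothetical placeholders) looks like this:
```python
# Minimal sketch: train a SentencePiece unigram model, which seeds a large
# candidate vocabulary and prunes low-scoring tokens down to vocab_size.
# "corpus.txt" and the parameter values are hypothetical placeholders.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",      # raw text, one sentence per line
    model_prefix="unigram",  # writes unigram.model and unigram.vocab
    vocab_size=16000,
    model_type="unigram",    # the data-driven prune-to-size algorithm
)
```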
-
Code runs without definition of function (automatically calls a different function instead)
Hi, I'm studying the implementation of encode and decode functions for Google's SentencePiece tokenizer.
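For context, the round trip those functions implement looks like this with the official Python bindings (the model file name is a placeholder):
```python
# Minimal sketch: encode/decode round trip with the sentencepiece bindings.
# "unigram.model" is a placeholder for any trained SentencePiece model file.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="unigram.model")
ids = sp.encode("Hello, world!", out_type=int)      # text -> token ids
pieces = sp.encode("Hello, world!", out_type=str)   # text -> subword pieces
print(ids, pieces)
print(sp.decode(ids))                               # ids -> original text
```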
-
How to handle multiple languages in a sentence?
I think many LMs nowadays use Unicode tokenizers that are not tied to specific languages. E.g. sentencepiece is the most popular one: https://github.com/google/sentencepiece
- Large language models are having their Stable Diffusion moment
-
LLaMA-7B in Pure C++ with full Apple Silicon support
If you are interested in implementing LLaMA yourself, or in learning, I noticed that the reference code by Facebook is some of the cleanest, easiest-to-read ML code I've seen in a while. https://github.com/facebookresearch/llama/blob/main/llama/mo... It's about 200 lines long. You probably do need a bit of background knowledge to understand what you are reading, but I was pleasantly surprised.
For comparison, the StableDiffusion torch code in the diffusers and transformers Python libraries has lots of conditionals, unused experimental branches, etc., which can make it hard to follow what is going on.
Last weekend I got the "main loop" of the transformer working in pure CPU Rust code, following the reference code. My crappy code is just very, very slow, as I focused on getting it to run, not making it fast. The tokenizer uses a Google library, https://github.com/google/sentencepiece, but luckily for inference it seems you just need to be able to parse the tokenizer model file, not understand how it was created; I was able to strip the protobuf files out of that repository, add them to my Rust project, and read the tokens.
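That "just parse the model file" step can be sketched in Python rather than the commenter's Rust; this assumes the pip package's bundled protobuf schema (`sentencepiece.sentencepiece_model_pb2`), and `tokenizer.model` is a placeholder for e.g. the LLaMA tokenizer file.
```python
# Minimal sketch: read the token table straight out of a SentencePiece
# model file via the protobuf schema bundled with the pip package.
# "tokenizer.model" is a hypothetical placeholder path.
import sentencepiece.sentencepiece_model_pb2 as model_pb2

proto = model_pb2.ModelProto()
with open("tokenizer.model", "rb") as f:
    proto.ParseFromString(f.read())

# Each entry carries the piece string and its unigram log-probability score.
for token_id, piece in enumerate(proto.pieces[:10]):
    print(token_id, repr(piece.piece), piece.score)
```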
I am optimistic that someone will make a high-quality CPU or CPU+GPU+SSD combination thingamajig that will make it somewhat practical to run even the large LLM models without needing an A100 or two.
- ChatGPT in an iOS Shortcut – Worlds Smartest HomeKit Voice Assistant
langchain
-
🗣️🤖 Ask your Neo4j knowledge base questions in natural language & get KPIs
LangChain and its Custom Tools implementation are also a great (and very efficient) way to set up a dedicated Q&A agent (for example, for chat purposes).
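A minimal sketch of wiring such a custom tool into a LangChain agent; the KPI lookup here is a hypothetical stand-in for a real Neo4j driver call, and it assumes an OpenAI API key.
```python
# Minimal sketch: expose a (hypothetical) Neo4j KPI lookup as a custom tool.
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

def kpi_lookup(question: str) -> str:
    # Hypothetical stand-in: a real version would build a Cypher query
    # from the question and run it with the neo4j driver.
    return "churn_rate: 2.3%"

tools = [
    Tool(
        name="neo4j_kpis",
        func=kpi_lookup,
        description="Answers KPI questions from the Neo4j knowledge base.",
    )
]

agent = initialize_agent(
    tools, ChatOpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
print(agent.run("What was last month's churn rate?"))
```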
- LangChain – Some quick, high level thoughts on improvements/changes
-
Claude 2 Internal API Client and CLI
We're using it via langchain talking to Amazon Bedrock, which is hosting Claude 1.x. It's comparable to GPT-3.x, not bad. The integration doesn't seem to be fully there, though; I think langchain expects "Human:" and "AI:", but Claude uses "Assistant:".
https://github.com/hwchase17/langchain/issues/2638
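For reference, Anthropic's documented prompt format delimits turns with "\n\nHuman:" and "\n\nAssistant:", which is what breaks templates expecting "AI:"; a plain illustration (client/Bedrock wiring omitted):
```python
# Minimal sketch of the Claude prompt format the comment refers to:
# turns are delimited by "\n\nHuman:" and "\n\nAssistant:", not "AI:".
prompt = (
    "\n\nHuman: Summarize the Apache License 2.0 in one sentence."
    "\n\nAssistant:"
)
# A client (Anthropic's SDK, or Bedrock) completes the text after the final
# "Assistant:" marker; a template that emits "AI:" breaks this contract.
```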
-
Any better alternatives to fine-tuning GPT-3 yet to create a custom chatbot persona based on provided knowledge for others to use?
Depending on how much work you want to put into it, you can get started at HuggingFace with their models and datasets, but you'd need compute power, MLOps tooling, etc. I was introduced to the concept in this video. Google has its Vertex AI tools on Google Cloud, and there's always LangChain, though I'm not sure about anything more recent.
-
langchain VS griptape - a user suggested alternative
2 projects | 11 Jul 2023
2 projects | 9 Jul 2023
-
Vector storage is coming to Meilisearch to empower search through AI
a documentation chatbot proof of concept using GPT-3.5 and LangChain
-
ChatPDF: What ChatGPT Can't Do, This Can!
I encourage everyone to pay attention to the Langchain open-source project and leverage it to achieve tasks that ChatGPT cannot handle.
- LangChain Arbitrary Command Execution - CVE-2023-34541
-
Langchain Is Pointless
Yeah, I never know where exactly memory goes in langchain; it's not always clear. But sure, the main insight I remember is this: take a look at their MULTI_PROMPT_ROUTER_TEMPLATE: https://github.com/hwchase17/langchain/blob/560c4dfc98287da1...
It's a lot of instructions for an LLM. They seem to forget that an LLM is an auto-completion machine, and what data it was trained on. Using <<>> for sections is not a normal thing; it's not markdown, which is probably read far more often on the internet. Instead of open JSON comments, why not type signatures? Instead of so many rules, why not give it examples? It is an autocomplete machine!
They rely too much on the LLM being smart, probably because they only test things on GPT-4 and 3.5, but with GPT4All models this prompt was not working at all, so I had to rewrite it. For simple routing we don't even need JSON, and carrying the `next_inputs` here is weird if you don't need it.
So this is my version of it: https://gist.github.com/rogeriochaves/b67676977eebb1936b9b5c...
It's so basic it's dumb, yet it is more powerful, because it does not rely on GPT-4-level intelligence; it's just what I needed.
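In that spirit, a hypothetical example-driven router (a sketch, not the commenter's gist, which is only partially linked above) can skip JSON entirely:
```python
# Minimal sketch: route by few-shot examples instead of rule-heavy JSON.
# Route names, examples, and the llm_complete callable are hypothetical.
ROUTER_PROMPT = """Route each question to one of: physics, math, other.

Question: Why is the sky blue?
Route: physics

Question: What is the integral of x^2?
Route: math

Question: Who wrote Hamlet?
Route: other

Question: {question}
Route:"""

def route(llm_complete, question: str) -> str:
    # llm_complete is any text-completion callable, e.g. a local GPT4All model.
    return llm_complete(ROUTER_PROMPT.format(question=question)).strip()
```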
What are some alternatives?
transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
CTranslate2 - Fast inference engine for Transformer models
llama_index - LlamaIndex is a data framework for your LLM applications
llama - Inference code for Llama models
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]
OpenNMT-Tutorial - Neural Machine Translation (NMT) tutorial. Data preprocessing, model training, evaluation, and deployment.
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.